| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
208356 | https://en.wikipedia.org/wiki/Text%20file | Text file | A text file (sometimes spelled textfile; an old alternative name is flat file) is a kind of computer file that is structured as a sequence of lines of electronic text. A text file exists stored as data within a computer file system.
In operating systems such as CP/M, where the operating system does not keep track of the file size in bytes, the end of a text file is denoted by placing one or more special characters, known as an end-of-file (EOF) marker, as padding after the last line in a text file. In modern operating systems such as DOS, Microsoft Windows and Unix-like systems, text files do not contain any special EOF character, because file systems on those operating systems keep track of the file size in bytes.
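To make the CP/M convention concrete, the minimal Python sketch below (with invented sample bytes) treats the first SUB character (Ctrl-Z, byte 0x1A) as the logical end of the text; everything after it is padding.

```python
# Minimal sketch of CP/M-style EOF handling; the sample bytes are invented.
CPM_EOF = b"\x1a"  # SUB / Ctrl-Z, the CP/M end-of-file marker

def strip_cpm_padding(data: bytes) -> bytes:
    """Return the logical text: everything before the first EOF marker."""
    marker = data.find(CPM_EOF)
    return data if marker == -1 else data[:marker]

raw = b"HELLO, WORLD\r\n" + CPM_EOF * 3   # text padded with EOF markers
print(strip_cpm_padding(raw).decode("ascii"))   # -> HELLO, WORLD
```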
Some operating systems, such as Multics, Unix-like systems, CP/M, DOS, the classic Mac OS, and Windows, store text files as a sequence of bytes, with an end-of-line delimiter at the end of each line. Other operating systems, such as OpenVMS and OS/360 and its successors, have record-oriented filesystems, in which text files are stored as a sequence either of fixed-length records or of variable-length records with a record-length value in the record header.
"Text file" refers to a type of container, while plain text refers to a type of content.
At a generic level of description, there are two kinds of computer files: text files and binary files.
Data storage
Because of their simplicity, text files are commonly used for storage of information. They avoid some of the problems encountered with other file formats, such as endianness, padding bytes, or differences in the number of bytes in a machine word. Further, when data corruption occurs in a text file, it is often easier to recover and continue processing the remaining contents. A disadvantage of text files is that they usually have a low entropy, meaning that the information occupies more storage than is strictly necessary.
A simple text file may need no additional metadata (other than knowledge of its character set) to assist the reader in interpretation. A text file may contain no data at all, in which case it is a zero-byte file.
Encoding
The ASCII character set is the most common compatible subset of character sets for English-language text files, and is generally assumed to be the default file format in many situations. It covers American English, but for the British pound sign, the euro sign, or characters used outside English, a richer character set must be used. In many systems, this is chosen based on the default locale setting on the computer it is read on. Prior to UTF-8, this was traditionally a single-byte encoding (such as ISO-8859-1 through ISO-8859-16) for European languages, or a wide character encoding for Asian languages.
Because encodings necessarily have only a limited repertoire of characters, often very small, many are only usable to represent text in a limited subset of human languages. Unicode is an attempt to create a common standard for representing all known languages, and most known character sets are subsets of the very large Unicode character set. Although there are multiple character encodings available for Unicode, the most common is UTF-8, which has the advantage of being backwards-compatible with ASCII; that is, every ASCII text file is also a UTF-8 text file with identical meaning. UTF-8 also has the advantage that it is easily auto-detectable. Thus, a common operating mode of UTF-8 capable software, when opening files of unknown encoding, is to try UTF-8 first and fall back to a locale dependent legacy encoding when it definitely is not UTF-8.
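That operating mode can be sketched in a few lines of Python; this is an illustration of the strategy, not any particular program's implementation. Strict UTF-8 decoding fails on almost any legacy single-byte text that is not pure ASCII, so a decode error is a reliable cue to fall back to the locale's legacy encoding.

```python
import locale

def decode_text(raw: bytes) -> str:
    # Try strict UTF-8 first; non-ASCII legacy text rarely validates as UTF-8.
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # Fall back to the locale-dependent legacy encoding (e.g., cp1252).
        return raw.decode(locale.getpreferredencoding(False), errors="replace")

print(decode_text("naïve".encode("utf-8")))    # decoded as UTF-8
print(decode_text("naïve".encode("latin-1")))  # result depends on the locale
```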
Formats
On most operating systems, the name text file refers to a file format that allows only plain text content with very little formatting (e.g., no bold or italic types). Such files can be viewed and edited on text terminals or in simple text editors. Text files usually have the MIME type text/plain, usually with additional information indicating an encoding.
Microsoft Windows text files
DOS and Microsoft Windows use a common text file format, with each line of text separated by a two-character combination: carriage return (CR) and line feed (LF). It is common for the last line of text not to be terminated with a CR-LF marker, and many text editors (including Notepad) do not automatically insert one on the last line.
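A minimal sketch of handling this convention: normalize CR-LF pairs to bare LF before splitting into lines, and accept a final line without a terminator.

```python
def normalize_crlf(data: bytes) -> bytes:
    """Convert DOS/Windows CR+LF line endings to bare LF."""
    return data.replace(b"\r\n", b"\n")

dos_text = b"first line\r\nsecond line\r\nlast line, no terminator"
print(normalize_crlf(dos_text).split(b"\n"))
# [b'first line', b'second line', b'last line, no terminator']
```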
On Microsoft Windows operating systems, a file is regarded as a text file if the suffix of the name of the file (the "filename extension") is .txt. However, many other suffixes are used for text files with specific purposes. For example, source code for computer programs is usually kept in text files that have file name suffixes indicating the programming language in which the source is written.
Most Microsoft Windows text files use ANSI, OEM, Unicode or UTF-8 encoding. What Microsoft Windows terminology calls "ANSI encodings" are usually single-byte ISO/IEC 8859 encodings (i.e. ANSI in the Microsoft Notepad menus is really "System Code Page", a non-Unicode, legacy encoding), except in locales such as Chinese, Japanese and Korean that require double-byte character sets. ANSI encodings were traditionally used as default system locales within Microsoft Windows, before the transition to Unicode. By contrast, OEM encodings, also known as DOS code pages, were defined by IBM for use in the original IBM PC text mode display system. They typically include graphical and line-drawing characters common in DOS applications. "Unicode"-encoded Microsoft Windows text files contain text in UTF-16 Unicode Transformation Format. Such files normally begin with a byte order mark (BOM), which communicates the endianness of the file content. Although UTF-8 does not suffer from endianness problems, many Microsoft Windows programs (e.g., Notepad) prepend the contents of UTF-8-encoded files with a BOM, to differentiate UTF-8 encoding from other 8-bit encodings.
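The BOM conventions above can be illustrated with a small encoding sniffer. This is a sketch rather than a robust detector: many real files carry no BOM at all, in which case it falls back to a caller-supplied default.

```python
import codecs

# Longer signatures first: BOM_UTF16_LE is a prefix of BOM_UTF32_LE.
_BOMS = [
    (codecs.BOM_UTF8, "utf-8-sig"),      # Notepad-style UTF-8 with BOM
    (codecs.BOM_UTF32_LE, "utf-32"),
    (codecs.BOM_UTF32_BE, "utf-32"),
    (codecs.BOM_UTF16_LE, "utf-16"),     # Windows "Unicode"
    (codecs.BOM_UTF16_BE, "utf-16"),
]

def sniff_encoding(raw: bytes, default: str = "utf-8") -> str:
    for bom, encoding in _BOMS:
        if raw.startswith(bom):
            return encoding
    return default

notepad_unicode = codecs.BOM_UTF16_LE + "hello".encode("utf-16-le")
print(sniff_encoding(notepad_unicode))   # -> utf-16
```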
Unix text files
On Unix-like operating systems, the text file format is precisely described: POSIX defines a text file as a file that contains characters organized into zero or more lines, where lines are sequences of zero or more non-newline characters plus a terminating newline character, normally LF.
Additionally, POSIX defines a printable file as a text file whose characters are printable, space, or backspace, according to regional rules. This excludes most control characters, which are not printable.
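These two definitions are easy to check mechanically. The sketch below is a simplified reading of them; it ignores POSIX line-length limits and uses Python's notion of "printable" as a stand-in for the locale-dependent one.

```python
def is_posix_text(data: bytes) -> bool:
    # Zero or more lines, each ending in a newline: a non-empty
    # POSIX text file must therefore end with LF.
    return (not data) or data.endswith(b"\n")

def is_printable_file(text: str) -> bool:
    # Printable file: only printable characters, spaces, or backspaces.
    return all(ch.isprintable() or ch in " \t\n\b" for ch in text)

print(is_posix_text(b"one line\n"))        # True
print(is_posix_text(b"no terminator"))     # False
print(is_printable_file("plain text\n"))   # True
print(is_printable_file("bell \x07\n"))    # False
```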
Apple Macintosh text files
Prior to the advent of macOS, the classic Mac OS system regarded the content of a file (the data fork) to be a text file when its resource fork indicated that the type of the file was "TEXT". Lines of classic Mac OS text files are terminated with CR characters.
Being a Unix-like system, macOS uses the Unix format for text files. The Uniform Type Identifier (UTI) used for text files in macOS is "public.plain-text"; additional, more specific UTIs are "public.utf8-plain-text" for UTF-8-encoded text, "public.utf16-external-plain-text" and "public.utf16-plain-text" for UTF-16-encoded text, and "com.apple.traditional-mac-plain-text" for classic Mac OS text files.
Rendering
When opened by a text editor, human-readable content is presented to the user, usually the file's plain text. Depending on the application, control codes may be rendered either as literal instructions acted upon by the editor, or as visible escape characters that can be edited as plain text. Though a text file may contain plain text, control characters within the file (especially the end-of-file character) can prevent the plain text from being displayed by a particular rendering method.
| Technology | Data storage and memory | null |
208430 | https://en.wikipedia.org/wiki/Enceladus | Enceladus | Enceladus is the sixth-largest moon of Saturn and the 18th-largest in the Solar System. It is about 500 km in diameter, about a tenth of that of Saturn's largest moon, Titan. It is mostly covered by fresh, clean ice, making it one of the most reflective bodies of the Solar System. Consequently, its surface temperature at noon reaches only −198 °C (75 K), far colder than a light-absorbing body would be. Despite its small size, Enceladus has a wide variety of surface features, ranging from old, heavily cratered regions to young, tectonically deformed terrain.
Enceladus was discovered on August 28, 1789, by William Herschel, but little was known about it until the two Voyager spacecraft, Voyager 1 and Voyager 2, flew by Saturn in 1980 and 1981. In 2005, the spacecraft Cassini started multiple close flybys of Enceladus, revealing its surface and environment in greater detail. In particular, Cassini discovered water-rich plumes venting from the south polar region. Cryovolcanoes near the south pole shoot geyser-like jets of water vapor, molecular hydrogen, other volatiles, and solid material, including sodium chloride crystals and ice particles, into space, totaling about 200 kg per second. More than 100 geysers have been identified. Some of the water vapor falls back as "snow"; the rest escapes and supplies most of the material making up Saturn's E ring. According to NASA scientists, the plumes are similar in composition to comets. In 2014, NASA reported that Cassini had found evidence for a large south polar subsurface ocean of liquid water with a thickness of around 10 km. The existence of Enceladus' subsurface ocean has since been mathematically modelled and replicated.
These observations of active cryoeruptions, along with the finding of escaping internal heat and very few (if any) impact craters in the south polar region, show that Enceladus is currently geologically active. Like many other satellites in the extensive systems of the giant planets, Enceladus participates in an orbital resonance. Its resonance with Dione excites its orbital eccentricity, which is damped by tidal forces, tidally heating its interior and driving the geological activity.
Cassini performed chemical analysis of Enceladus's plumes, finding evidence for hydrothermal activity, possibly driving complex chemistry. Ongoing research on Cassini data suggests that Enceladus's hydrothermal environment could be habitable to some of the microorganisms found at Earth's hydrothermal vents, and that methane found in the plumes could be produced by such organisms.
History
Discovery
Enceladus was discovered by William Herschel on August 28, 1789, during the first use of his new 40-foot telescope, then the largest in the world, at Observatory House in Slough, England. Its faint apparent magnitude (HV = +11.7) and its proximity to the much brighter Saturn and Saturn's rings make Enceladus difficult to observe from Earth with smaller telescopes. Like many satellites of Saturn discovered prior to the Space Age, Enceladus was first observed during a Saturnian equinox, when Earth is within the ring plane. At such times, the reduction in glare from the rings makes the moons easier to observe. Prior to the Voyager missions, the view of Enceladus improved little beyond the dot first observed by Herschel. Only its orbital characteristics were known, with estimations of its mass, density and albedo.
Naming
Enceladus is named after the giant Enceladus of Greek mythology. The name, like the names of each of the first seven satellites of Saturn to be discovered, was suggested by William Herschel's son John Herschel in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. He chose these names because Saturn, known in Greek mythology as Cronus, was the leader of the Titans.
Geological features on Enceladus are named by the International Astronomical Union (IAU) after characters and places from Richard Francis Burton's 1885 translation of The Book of One Thousand and One Nights. Impact craters are named after characters, whereas other feature types, such as fossae (long, narrow depressions), dorsa (ridges), planitiae (plains), sulci (long parallel grooves), and rupes (cliffs) are named after places. The IAU has officially named 85 features on Enceladus, most recently Samaria Rupes, formerly called Samaria Fossa.
Shape and size
Enceladus is a relatively small satellite composed of ice and rock. It is a scalene ellipsoid in shape; its diameters, calculated from images taken by Cassini's ISS (Imaging Science Subsystem) instrument, are 513 km between the sub- and anti-Saturnian poles, 503 km between the leading and trailing hemispheres, and 497 km between the north and south poles.
Enceladus is only one-seventh the diameter of Earth's Moon. It ranks sixth in both mass and size among the satellites of Saturn, after Titan (5,150 km), Rhea (1,528 km), Iapetus (1,469 km), Dione (1,123 km) and Tethys (1,062 km).
Orbit and rotation
Enceladus is one of the major inner satellites of Saturn along with Dione, Tethys, and Mimas. It orbits at 238,000 km from Saturn's center and 180,000 km from its cloud tops, between the orbits of Mimas and Tethys. It orbits Saturn every 32.9 hours, fast enough for its motion to be observed over a single night of observation. Enceladus is currently in a 2:1 mean-motion orbital resonance with Dione, completing two orbits around Saturn for every one orbit completed by Dione.
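The 2:1 ratio follows directly from Kepler's third law, T² ∝ a³. The quick check below uses approximate semi-major axes: 238,000 km for Enceladus, as above, and 377,400 km for Dione, a textbook value not given in this article.

```python
# Kepler's third law check of the 2:1 Enceladus-Dione resonance.
a_enceladus_km = 238_000   # from the text above
a_dione_km = 377_400       # assumed textbook value, not from this article

period_ratio = (a_dione_km / a_enceladus_km) ** 1.5   # T scales as a^(3/2)
print(f"Dione / Enceladus period ratio: {period_ratio:.2f}")  # ~2.00
print(f"Implied Dione period: {32.9 * period_ratio:.1f} h")   # ~65.7 h
```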
This resonance maintains Enceladus's orbital eccentricity (0.0047), which is known as a forced eccentricity. This non-zero eccentricity results in tidal deformation of Enceladus. The dissipated heat resulting from this deformation is the main heating source for Enceladus's geologic activity. Enceladus orbits within the densest part of Saturn's E ring, the outermost of its major rings, and is the main source of the ring's material composition.
Like most of Saturn's larger satellites, Enceladus rotates synchronously with its orbital period, keeping one face pointed toward Saturn. Unlike Earth's Moon, Enceladus does not appear to librate more than 1.5° about its spin axis. However, analysis of the shape of Enceladus suggests that at some point it was in a 1:4 forced secondary spin–orbit libration. This libration could have provided Enceladus with an additional heat source.
Source of the E ring
Plumes from Enceladus, which are similar in composition to comets, have been shown to be the source of the material in Saturn's E ring. The E ring is the widest and outermost ring of Saturn (except for the tenuous Phoebe ring). It is an extremely wide but diffuse disk of microscopic icy or dusty material distributed between the orbits of Mimas and Titan.
Mathematical models show that the E ring is unstable, with a lifespan between 10,000 and 1,000,000 years; therefore, particles composing it must be constantly replenished. Enceladus is orbiting inside the ring, at its narrowest but highest density point. In the 1980s, some astronomers suspected that Enceladus is the main source of particles for the ring. This hypothesis was confirmed by Cassini's first two close flybys in 2005.
The Cosmic Dust Analyzer (CDA) "detected a large increase in the number of particles near Enceladus", confirming it as the primary source for the E ring. Analysis of the CDA and INMS data suggests that the gas cloud Cassini flew through during the July encounter, and observed from a distance with its magnetometer and UVIS, was actually a water-rich cryovolcanic plume, originating from vents near the south pole.
Visual confirmation of venting came in November 2005, when Cassini imaged geyser-like jets of icy particles rising from Enceladus's south polar region. (Although the plume was imaged before, in January and February 2005, additional studies of the camera's response at high phase angles, when the Sun is almost behind Enceladus, and comparison with equivalent high-phase-angle images taken of other Saturnian satellites, were required before this could be confirmed.)
Geology
Surface features
Voyager 2 was the first spacecraft to observe Enceladus's surface in detail, in August 1981. Examination of the resulting highest-resolution imagery revealed at least five different types of terrain, including several regions of cratered terrain, regions of smooth (young) terrain, and lanes of ridged terrain often bordering the smooth areas. Extensive linear cracks and scarps were observed. Given the relative lack of craters on the smooth plains, these regions are probably less than a few hundred million years old.
Accordingly, Enceladus must have been recently active with "water volcanism" or other processes that renew the surface. The fresh, clean ice that dominates its surface makes Enceladus the most reflective body in the Solar System, with a visual geometric albedo of 1.38 and a bolometric Bond albedo of 0.81 ± 0.04. Because it reflects so much sunlight, its surface only reaches a mean noon temperature of −198 °C (75 K), somewhat colder than other Saturnian satellites.
Observations during three flybys on February 17, March 9, and July 14, 2005, revealed Enceladus's surface features in much greater detail than the Voyager 2 observations. The smooth plains, which Voyager 2 had observed, resolved into relatively crater-free regions filled with numerous small ridges and scarps. Numerous fractures were found within the older, cratered terrain, suggesting that the surface has been subjected to extensive deformation since the craters were formed.
Some areas contain no craters, indicating major resurfacing events in the geologically recent past. There are fissures, plains, corrugated terrain and other crustal deformations. Several additional regions of young terrain were discovered in areas not well-imaged by either Voyager spacecraft, such as the bizarre terrain near the south pole. All of this indicates that Enceladus's interior is liquid today, even though it should have been frozen long ago.
Impact craters
Impact cratering is a common occurrence on many Solar System bodies. Much of Enceladus's surface is covered with craters at various densities and levels of degradation. This subdivision of cratered terrains on the basis of crater density (and thus surface age) suggests that Enceladus has been resurfaced in multiple stages.
Cassini observations provided a much closer look at the crater distribution and size, showing that many of Enceladus's craters are heavily degraded through viscous relaxation and fracturing. Viscous relaxation allows gravity, over geologic time scales, to deform craters and other topographic features formed in water ice, reducing the amount of topography over time. The rate at which this occurs is dependent on the temperature of the ice: warmer ice is easier to deform than colder, stiffer ice. Viscously relaxed craters tend to have domed floors, or are recognized as craters only by a raised, circular rim. Dunyazad crater is a prime example of a viscously relaxed crater on Enceladus, with a prominent domed floor.
Tectonic features
Voyager 2 found several types of tectonic features on Enceladus, including troughs, scarps, and belts of grooves and ridges. Results from Cassini suggest that tectonics is the dominant mode of deformation on Enceladus; among the more dramatic tectonic features noted are rifts, canyons that can be up to 200 km long, 5–10 km wide, and 1 km deep. Such features are geologically young, because they cut across other tectonic features and have sharp topographic relief with prominent outcrops along the cliff faces.
Evidence of tectonics on Enceladus is also derived from grooved terrain, consisting of lanes of curvilinear grooves and ridges. These bands, first discovered by Voyager 2, often separate smooth plains from cratered regions. Grooved terrains such as the Samarkand Sulci are reminiscent of grooved terrain on Ganymede, but the grooved topography on Enceladus is generally more complex. Rather than parallel sets of grooves, these lanes often appear as bands of crudely aligned, chevron-shaped features.
In other areas, these bands bow upwards with fractures and ridges running the length of the feature. Cassini observations of the Samarkand Sulci have revealed dark spots (125 and 750 m wide) located parallel to the narrow fractures. Currently, these spots are interpreted as collapse pits within these ridged plain belts.
In addition to deep fractures and grooved lanes, Enceladus has several other types of tectonic terrain. Many of these fractures are found in bands cutting across cratered terrain. These fractures probably propagate down only a few hundred meters into the crust. Many have probably been influenced during their formation by the weakened regolith produced by impact craters, often changing the strike of the propagating fracture.
Another example of tectonic features on Enceladus are the linear grooves first found by Voyager 2 and seen at a much higher resolution by Cassini. These linear grooves can be seen cutting across other terrain types, like the groove and ridge belts. Like the deep rifts, they are among the youngest features on Enceladus. However, some linear grooves have been softened like the craters nearby, suggesting that they are older. Ridges have also been observed on Enceladus, though not nearly to the extent as those seen on Europa. These ridges are relatively limited in extent and are up to one kilometer tall. One-kilometer high domes have also been observed. Given the level of resurfacing found on Enceladus, it is clear that tectonic movement has been an important driver of geology for much of its history.
Smooth plains
Two regions of smooth plains were observed by Voyager 2. They generally have low relief and have far fewer craters than in the cratered terrains, indicating a relatively young surface age. In one of the smooth plain regions, Sarandib Planitia, no impact craters were visible down to the limit of resolution. Another region of smooth plains to the southwest of Sarandib is criss-crossed by several troughs and scarps. Cassini has since viewed these smooth plains regions, like Sarandib Planitia and Diyar Planitia, at much higher resolution. Cassini images show these regions filled with low-relief ridges and fractures, probably caused by shear deformation. The high-resolution images of Sarandib Planitia revealed a number of small impact craters, which allow for an estimate of the surface age, either 170 million years or 3.7 billion years, depending on assumed impactor population.
The expanded surface coverage provided by Cassini has allowed for the identification of additional regions of smooth plains, particularly on Enceladus's leading hemisphere (the side of Enceladus that faces the direction of motion as it orbits Saturn). Rather than being covered in low-relief ridges, this region is covered in numerous criss-crossing sets of troughs and ridges, similar to the deformation seen in the south polar region. This area is on the opposite side of Enceladus from Sarandib and Diyar Planitiae, suggesting that the placement of these regions is influenced by Saturn's tides on Enceladus.
South polar region
Images taken by Cassini during the flyby on July 14, 2005, revealed a distinctive, tectonically deformed region surrounding Enceladus's south pole. This area, reaching as far north as 60° south latitude, is covered in tectonic fractures and ridges. The area has few sizable impact craters, suggesting that it is the youngest surface on Enceladus and on any of the mid-sized icy satellites. Modeling of the cratering rate suggests that some regions of the south polar terrain are possibly as young as 500,000 years or less.
Near the center of this terrain are four fractures bounded by ridges, unofficially called "tiger stripes". They appear to be the youngest features in this region and are surrounded by mint-green-colored (in false color, UV–green–near IR images), coarse-grained water ice, seen elsewhere on the surface within outcrops and fracture walls. Here the "blue" ice is on a flat surface, indicating that the region is young enough not to have been coated by fine-grained water ice from the E ring.
Results from the visual and infrared mapping spectrometer (VIMS) instrument suggest that the green-colored material surrounding the tiger stripes is chemically distinct from the rest of the surface of Enceladus. VIMS detected crystalline water ice in the stripes, suggesting that they are quite young (likely less than 1,000 years old) or the surface ice has been thermally altered in the recent past. VIMS also detected simple organic (carbon-containing) compounds in the tiger stripes, chemistry not found anywhere else on Enceladus thus far.
One of these areas of "blue" ice in the south polar region was observed at high resolution during the July 14, 2005, flyby, revealing an area of extreme tectonic deformation and blocky terrain, with some areas covered in boulders 10–100 m across.
The boundary of the south polar region is marked by a pattern of parallel, Y- and V-shaped ridges and valleys. The shape, orientation, and location of these features suggest they are caused by changes in the overall shape of Enceladus. As of 2006, there were two theories for what could cause such a shift in shape: the orbit of Enceladus may have migrated inward, increasing its rotation rate and leading to a more oblate shape; or a rising mass of warm, low-density material in Enceladus's interior may have shifted the position of the current south polar terrain from Enceladus's southern mid-latitudes to its south pole.
Consequently, the moon's ellipsoid shape would have adjusted to match the new orientation. One problem with the polar flattening hypothesis is that both polar regions should have similar tectonic deformation histories. However, the north polar region is densely cratered, and has a much older surface age than the south pole. Thickness variations in Enceladus's lithosphere are one explanation for this discrepancy. Variations in lithospheric thickness are supported by the correlation between the Y-shaped discontinuities and the V-shaped cusps along the south polar terrain margin and the relative surface age of the adjacent non-south polar terrain regions. The Y-shaped discontinuities, and the north–south trending tension fractures into which they lead, are correlated with younger terrain with presumably thinner lithospheres. The V-shaped cusps are adjacent to older, more heavily cratered terrains.
South polar plumes
Following Voyager's encounters with Enceladus in the early 1980s, scientists postulated it to be geologically active based on its young, reflective surface and location near the core of the E ring. Based on the connection between Enceladus and the E ring, scientists suspected that Enceladus was the source of material in the E ring, perhaps through venting of water vapor. The first Cassini sighting of a plume of icy particles above Enceladus's south pole came from the Imaging Science Subsystem (ISS) images taken in January and February 2005, though the possibility of a camera artifact delayed an official announcement.
Data from the magnetometer instrument during the February 17, 2005, encounter provided evidence for a planetary atmosphere. The magnetometer observed a deflection or "draping" of the magnetic field, consistent with local ionization of neutral gas. During the two following encounters, the magnetometer team determined that gases in Enceladus's atmosphere are concentrated over the south polar region, with atmospheric density away from the pole being much lower. Unlike the magnetometer, the Ultraviolet Imaging Spectrograph failed to detect an atmosphere above Enceladus during the February encounter when it looked over the equatorial region, but did detect water vapor during an occultation over the south polar region during the July encounter.
Cassini flew through this gas cloud on a few encounters, allowing instruments such as the ion and neutral mass spectrometer (INMS) and the cosmic dust analyzer (CDA) to directly sample the plume. (See 'Composition' section.) The November 2005 images showed the plume's fine structure, revealing numerous jets (perhaps issuing from numerous distinct vents) within a larger, faint component extending out nearly 500 km from the surface. The particles have a bulk velocity of 1.25 ± 0.1 km/s, and a maximum velocity of 3.40 km/s. Cassini's UVIS later observed gas jets coinciding with the dust jets seen by ISS during a non-targeted encounter with Enceladus in October 2007.
The combined analysis of imaging, mass spectrometry, and magnetospheric data suggests that the observed south polar plume emanates from pressurized subsurface chambers, similar to Earth's geysers or fumaroles. Fumaroles are probably the closer analogy, since periodic or episodic emission is an inherent property of geysers. The plumes of Enceladus were observed to be continuous to within a factor of a few. The mechanism that drives and sustains the eruptions is thought to be tidal heating.
The intensity of the eruption of the south polar jets varies significantly as a function of the position of Enceladus in its orbit. The plumes are about four times brighter when Enceladus is at apoapsis (the point in its orbit most distant from Saturn) than when it is at periapsis. This is consistent with geophysical calculations which predict the south polar fissures are under compression near periapsis, pushing them shut, and under tension near apoapsis, pulling them open. Strike-slip tectonics may also drive localized extension along alternating (left- and right- lateral) transtensional zones (e.g., pull-apart basins) over the Tiger Stripes, thereby regulating jet activity within these regions.
Much of the plume activity consists of broad curtain-like eruptions. Optical illusions from a combination of viewing direction and local fracture geometry previously made the plumes look like discrete jets.
The extent to which cryovolcanism really occurs is a subject of some debate. At Enceladus, it appears that cryovolcanism occurs because water-filled cracks are periodically exposed to vacuum, the cracks being opened and closed by tidal stresses.
Internal structure
Before the Cassini mission, little was known about the interior of Enceladus. However, flybys by Cassini provided information for models of Enceladus's interior, including a better determination of the mass and shape, high-resolution observations of the surface, and new insights on the interior.
Initial mass estimates from the Voyager program missions suggested that Enceladus was composed almost entirely of water ice. However, based on the effects of Enceladus's gravity on Cassini, its mass was determined to be much higher than previously thought, yielding a density of 1.61 g/cm3. This density is higher than those of Saturn's other mid-sized icy satellites, indicating that Enceladus contains a greater percentage of silicates and iron.
One study suggested that Iapetus and the other icy satellites of Saturn formed relatively quickly after the formation of the Saturnian subnebula, and thus were rich in short-lived radionuclides. These radionuclides, like aluminium-26 and iron-60, have short half-lives and would produce interior heating relatively quickly. Without the short-lived variety, Enceladus's complement of long-lived radionuclides would not have been enough to prevent rapid freezing of the interior, even with Enceladus's comparatively high rock–mass fraction, given its small size.
Given Enceladus's relatively high rock–mass fraction, the proposed enhancement in 26Al and 60Fe would result in a differentiated body, with an icy mantle and a rocky core. Subsequent radioactive and tidal heating would raise the temperature of the core to 1,000 K, enough to melt the inner mantle. For Enceladus to still be active, part of the core must have also melted, forming magma chambers that would flex under the strain of Saturn's tides. Tidal heating, such as from the resonance with Dione or from libration, would then have sustained these hot spots in the core and would power the current geological activity.
In addition to its mass and modeled geochemistry, researchers have also examined Enceladus's shape to determine if it is differentiated. One analysis used limb measurements to determine that its shape, assuming hydrostatic equilibrium, is consistent with an undifferentiated interior, in contradiction to the geological and geochemical evidence. However, the current shape also supports the possibility that Enceladus is not in hydrostatic equilibrium, and may have rotated faster at some point in the recent past (with a differentiated interior). Gravity measurements by Cassini show that the density of the core is low, indicating that the core contains water in addition to silicates.
Subsurface ocean
Evidence of liquid water on Enceladus began to accumulate in 2005, when scientists observed plumes containing water vapor spewing from its south polar surface, with jets moving 250 kg of water vapor every second at up to 2,189 km/h (1,360 mph) into space. Soon after, in 2006 it was determined that Enceladus's plumes are the source of Saturn's E Ring. The sources of salty particles are uniformly distributed along the tiger stripes, whereas sources of "fresh" particles are closely related to the high-speed gas jets. The "salty" particles are heavier and mostly fall back to the surface, whereas the fast "fresh" particles escape to the E ring, explaining its salt-poor composition of 0.5–2% of sodium salts by mass.
Gravimetric data from Cassini's December 2010 flybys showed that Enceladus likely has a liquid water ocean beneath its frozen surface, but at the time it was thought the subsurface ocean was limited to the south pole. The top of the ocean probably lies beneath an ice shelf 30 to 40 km thick. The ocean may be 10 km deep at the south pole.
Measurements of Enceladus's "wobble" as it orbits Saturn—called libration—suggest that the entire icy crust is detached from the rocky core and therefore that a global ocean is present beneath the surface. The amount of libration (0.120° ± 0.014°) implies that this global ocean is about 26 to 31 km deep. For comparison, Earth's ocean has an average depth of 3.7 kilometers.
Composition
The Cassini spacecraft flew through the southern plumes on several occasions to sample and analyze their composition. As of 2019, the data gathered is still being analyzed and interpreted. The plumes' salty composition (Na, Cl, CO3) indicates that the source is a salty subsurface ocean.
The INMS instrument detected mostly water vapor, as well as traces of molecular nitrogen, carbon dioxide, and trace amounts of simple hydrocarbons such as methane, propane, acetylene and formaldehyde. The plumes' composition, as measured by the INMS, is similar to that seen at most comets. Cassini also found traces of simple organic compounds in some dust grains, as well as larger organics such as benzene (C6H6), and complex macromolecular organics as large as 200 atomic mass units, and at least 15 carbon atoms in size.
The mass spectrometer detected molecular hydrogen (H2) which was in "thermodynamic disequilibrium" with the other components, and found traces of ammonia (NH3).
A model suggests that Enceladus's salty ocean (Na, Cl, CO3) has an alkaline pH of 11 to 12. The high pH is interpreted to be a consequence of serpentinization of chondritic rock that leads to the generation of H2, a geochemical source of energy that could support both abiotic and biological synthesis of organic molecules such as those that have been detected in Enceladus's plumes.
A further analysis in 2019 examined the spectral characteristics of ice grains in Enceladus's erupting plumes. The study found that nitrogen-bearing and oxygen-bearing amines were likely present, with significant implications for the availability of amino acids in the internal ocean. The researchers suggested that the compounds on Enceladus could be precursors for "biologically relevant organic compounds".
Possible heat sources
During the flyby of July 14, 2005, the Composite Infrared Spectrometer (CIRS) found a warm region near the south pole. Temperatures in this region ranged from 85 to 90 K, with small areas showing as high as 157 K, much too warm to be explained by solar heating, indicating that parts of the south polar region are heated from the interior of Enceladus. The presence of a subsurface ocean under the south polar region is now accepted, but it cannot explain the source of the heat, with an estimated heat flux of 200 mW/m2, which is about 10 times higher than that from radiogenic heating alone.
Several explanations for the observed elevated temperatures and the resulting plumes have been proposed, including venting from a subsurface reservoir of liquid water, sublimation of ice, decompression and dissociation of clathrates, and shear heating, but a complete explanation of all the heat sources causing the observed thermal power output of Enceladus has not yet been settled.
Heating in Enceladus has occurred through various mechanisms ever since its formation. Radioactive decay in its core may have initially heated it, giving it a warm core and a subsurface ocean, which is now kept above freezing through unidentified mechanisms. Geophysical models indicate that tidal heating is a main heat source, perhaps aided by radioactive decay and some heat-producing chemical reactions. A 2007 study predicted the internal heat of Enceladus, if generated by tidal forces, could be no greater than 1.1 gigawatts, but data taken over 16 months by Cassini's infrared spectrometer of the south polar terrain indicate that the internally generated heating power is about 4.7 gigawatts, and suggest that it is in thermal equilibrium.
The observed power output of 4.7 gigawatts is challenging to explain from tidal heating alone, so the main source of heat remains a mystery. Most scientists think the observed heat flux of Enceladus is not enough to maintain the subsurface ocean, and therefore any subsurface ocean must be a remnant of a period of higher eccentricity and tidal heating, or the heat is produced through another mechanism.
Tidal heating
Tidal heating occurs through tidal friction processes: orbital and rotational energy are dissipated as heat in the crust of an object. In addition, to the extent that tides produce heat along fractures, libration may affect the magnitude and distribution of such tidal shear heating. Tidal dissipation of Enceladus's ice crust is significant because Enceladus has a subsurface ocean. A computer simulation that used data from Cassini was published in November 2017, and it indicates that friction heat from the sliding rock fragments within the permeable and fragmented core of Enceladus could keep its underground ocean warm for billions of years. It is thought that if Enceladus had a more eccentric orbit in the past, the enhanced tidal forces could be sufficient to maintain a subsurface ocean, such that a periodic enhancement in eccentricity could maintain a subsurface ocean that periodically changes in size.
A 2016 analysis claimed that "a model of the tiger stripes as tidally flexed slots that puncture the ice shell can simultaneously explain the persistence of the eruptions through the tidal cycle, the phase lag, and the total power output of the tiger stripe terrain, while suggesting that eruptions are maintained over geological timescales." Previous models suggest that resonant perturbations of Dione could provide the necessary periodic eccentricity changes to maintain the subsurface ocean of Enceladus, if the ocean contains a substantial amount of ammonia. The surface of Enceladus indicates that the entire moon has experienced periods of enhanced heat flux in the past.
Radioactive heating
The "hot start" model of heating suggests Enceladus began as ice and rock that contained rapidly decaying short-lived radioactive isotopes of aluminium, iron and manganese. Enormous amounts of heat were then produced as these isotopes decayed for about 7 million years, resulting in the consolidation of rocky material at the core surrounded by a shell of ice. Although the heat from radioactivity would decrease over time, the combination of radioactivity and tidal forces from Saturn's gravitational tug could prevent the subsurface ocean from freezing.
The present-day radiogenic heating rate is 3.2 × 10¹⁵ ergs/s (or 0.32 gigawatts), assuming Enceladus has a composition of ice, iron and silicate materials. Heating from the long-lived radioactive isotopes uranium-238, uranium-235, thorium-232 and potassium-40 inside Enceladus would add 0.3 gigawatts to the observed heat flux. The presence of Enceladus's regionally thick subsurface ocean suggests a heat flux ≈10 times higher than that from radiogenic heating in the silicate core.
Chemical factors
Because no ammonia, which could act as an antifreeze, was initially found in the vented material by INMS or UVIS, it was thought such a heated, pressurized chamber would consist of nearly pure liquid water with a temperature of at least 270 K (−3 °C), because pure water requires more energy to melt.
In July 2009 it was announced that traces of ammonia had been found in the plumes during flybys in July and October 2008. Reducing the freezing point of water with ammonia would also allow for outgassing and higher gas pressure, and less heat required to power the water plumes. The subsurface layer heating the surface water ice could be an ammonia–water slurry at temperatures as low as 170 K (−103 °C), and thus less energy is required to produce the plume activity. However, the observed 4.7 gigawatts heat flux is enough to power the cryovolcanism without the presence of ammonia.
Origin
Mimas–Enceladus paradox
Mimas, the innermost of the round moons of Saturn and directly interior to Enceladus, is a geologically dead body, even though it should experience stronger tidal forces than Enceladus. This apparent paradox can be explained in part by temperature-dependent properties of water ice (the main constituent of the interiors of Mimas and Enceladus). The tidal heating per unit mass is given by the formula

q_tid = (63/38) ρ n⁵ r⁴ e² / (μQ),
where ρ is the (mass) density of the satellite, n is its mean orbital motion, r is the satellite's radius, e is the orbital eccentricity of the satellite, μ is the shear modulus and Q is the dimensionless dissipation factor. For a same-temperature approximation, the expected value of q_tid for Mimas is about 40 times that of Enceladus. However, the material parameters μ and Q are temperature dependent. At high temperatures (close to the melting point), μ and Q are low, so tidal heating is high. Modeling suggests that for Enceladus, both a 'basic' low-energy thermal state with little internal temperature gradient, and an 'excited' high-energy thermal state with a significant temperature gradient, and consequent convection (endogenic geologic activity), once established, would be stable.
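A back-of-envelope evaluation of the formula, using rough parameter values that are not taken from this article (densities of about 1.15 and 1.61 g/cm³, radii of 198 and 252 km, orbital periods of 22.6 and 32.9 h, eccentricities of 0.0196 and 0.0047, with μ and Q held equal for both moons), gives a ratio of roughly 30, the same order as the factor of about 40 quoted above:

```python
from math import pi

def q_tid(rho, period_h, r_km, e, mu=4e9, Q=100):
    """Tidal heating per unit mass: (63/38) * rho * n^5 * r^4 * e^2 / (mu*Q)."""
    n = 2 * pi / (period_h * 3600.0)   # mean orbital motion in rad/s
    return (63 / 38) * rho * n**5 * (r_km * 1e3)**4 * e**2 / (mu * Q)

# Approximate values; mu (ice shear modulus) and Q cancel in the ratio.
q_mimas = q_tid(rho=1150, period_h=22.6, r_km=198, e=0.0196)
q_enceladus = q_tid(rho=1610, period_h=32.9, r_km=252, e=0.0047)
print(f"q_Mimas / q_Enceladus = {q_mimas / q_enceladus:.0f}")  # ~30
```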
For Mimas, only a low-energy state is expected to be stable, despite its being closer to Saturn. So the model predicts a low-internal-temperature state for Mimas (values of μ and Q are high) but a possible higher-temperature state for Enceladus (values of μ and Q are low). Additional historical information is needed to explain how Enceladus first entered the high-energy state (e.g. more radiogenic heating or a more eccentric orbit in the past).
The significantly higher density of Enceladus relative to Mimas (1.61 vs. 1.15 g/cm3), implying a larger content of rock and more radiogenic heating in its early history, has also been cited as an important factor in resolving the Mimas paradox.
It has been suggested that for an icy satellite the size of Mimas or Enceladus to enter an 'excited state' of tidal heating and convection, it would need to enter an orbital resonance before it lost too much of its primordial internal heat. Because Mimas, being smaller, would cool more rapidly than Enceladus, its window of opportunity for initiating orbital resonance-driven convection would have been considerably shorter.
Proto-Enceladus hypothesis
Enceladus is losing mass at a rate of 200 kg/second. If mass loss at this rate continued for 4.5 Gyr, the satellite would have lost approximately 30% of its initial mass. A similar value is obtained by assuming that the initial densities of Enceladus and Mimas were equal. This suggests that tectonics in the south polar region is probably mainly related to subsidence and associated subduction caused by the process of mass loss.
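A quick arithmetic check of that figure, using an assumed present-day mass of about 1.08 × 10²⁰ kg (a value not stated in this article), lands in the same range:

```python
# 200 kg/s sustained over 4.5 Gyr, versus an assumed present mass.
SECONDS_PER_YEAR = 3.156e7
mass_lost = 200 * 4.5e9 * SECONDS_PER_YEAR   # ~2.8e19 kg
present_mass = 1.08e20                        # kg; assumed, not from the article

print(f"mass lost: {mass_lost:.1e} kg")
print(f"lost / present mass: {mass_lost / present_mass:.0%}")  # ~26%, near the ~30% above
```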
Date of formation
In 2016, a study of how the orbits of Saturn's moons should have changed due to tidal effects suggested that all of Saturn's satellites inward of Titan, including Enceladus (whose geologic activity was used to derive the strength of tidal effects on Saturn's satellites), may have formed as little as 100 million years ago.
A later study from 2019 estimated that the ocean is around one billion years old.
Potential habitability
Enceladus ejects plumes of salted water laced with grains of silica-rich sand, nitrogen (in ammonia), and organic molecules, including trace amounts of simple hydrocarbons such as methane (CH4), propane (C3H8), acetylene (C2H2) and formaldehyde (CH2O), which are carbon-bearing molecules. This indicates that hydrothermal activity—an energy source—may be at work in Enceladus's subsurface ocean. Models indicate that the large rocky core is porous, allowing water to flow through it, transferring heat and chemicals; this has since been supported by observations and other research. Molecular hydrogen (H2), a geochemical source of energy that can be metabolized by methanogen microbes to provide energy for life, could be present if, as models suggest, Enceladus's salty ocean has an alkaline pH from serpentinization of chondritic rock.
The presence of an internal global salty ocean with an aquatic environment supported by global ocean circulation patterns, with an energy source and complex organic compounds in contact with Enceladus's rocky core, may advance the study of astrobiology and the study of potentially habitable environments for microbial extraterrestrial life. Geochemical modeling results concerning then-undetected phosphorus indicated that the moon meets potential abiogenesis requirements. Phosphates have since been detected in material from a cryovolcanic plume observed by Cassini, as discussed in a paper in the June 14, 2023, issue of Nature entitled "Detection of Phosphates Originating From Enceladus's Ocean".
The presence of a wide range of organic compounds and ammonia indicates their source may be similar to the water/rock reactions known to occur on Earth and that are known to support life. Therefore, several robotic missions have been proposed to further explore Enceladus and assess its habitability. Some of the proposed missions are: Journey to Enceladus and Titan (JET), Enceladus Explorer (En-Ex), Enceladus Life Finder (ELF), Life Investigation For Enceladus (LIFE), and Enceladus Life Signatures and Habitability (ELSAH).
In June 2023, astronomers reported that the presence of phosphates on Enceladus has been detected, completing the discovery of all the basic chemical ingredients for life on the moon.
On December 14, 2023, astronomers reported the first time discovery, in the plumes of Enceladus, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."
Hydrothermal vents
On April 13, 2017, NASA announced the discovery of possible hydrothermal activity on Enceladus's sub-surface ocean floor. In 2015, the Cassini probe made a close fly-by of Enceladus's south pole, flying within about 49 km (30 mi) of the surface, as well as through a plume in the process. A mass spectrometer on the craft detected molecular hydrogen (H2) from the plume, and after months of analysis, the researchers concluded that the hydrogen was most likely the result of hydrothermal activity beneath the surface. It has been speculated that such activity could be a potential oasis of habitability.
The presence of ample hydrogen in Enceladus's ocean means that microbes – if any exist there – could use it to obtain energy by combining the hydrogen with carbon dioxide dissolved in the water. The chemical reaction is known as "methanogenesis" because it produces methane as a byproduct; methanogenesis is found at the root of the tree of life on Earth, close to the origin of all life that is known to exist.
Exploration
Voyager missions
The two Voyager spacecraft made the first close-up images of Enceladus. Voyager 1 was the first to fly past Enceladus, at a distance of 202,000 km on November 12, 1980. Images acquired from this distance had very poor spatial resolution, but revealed a highly reflective surface devoid of impact craters, indicating a youthful surface. Voyager 1 also confirmed that Enceladus was embedded in the densest part of Saturn's diffuse E ring. Combined with the apparent youthful appearance of the surface, Voyager scientists suggested that the E ring consisted of particles vented from Enceladus's surface. In 2017, a reprocessing of departure images from the probe revealed a possible precovery image of Enceladus' plumes.

Voyager 2 passed closer to Enceladus (87,010 km) on August 26, 1981, allowing higher-resolution images to be obtained. These images showed a young surface. They also revealed a surface with different regions with vastly different surface ages, with a heavily cratered mid- to high-northern latitude region, and a lightly cratered region closer to the equator. This geologic diversity contrasts with the ancient, heavily cratered surface of Mimas, another moon of Saturn slightly smaller than Enceladus. The geologically youthful terrains came as a great surprise to the scientific community, because no theory was then able to predict that such a small (and cold, compared to Jupiter's highly active moon Io) celestial body could bear signs of such activity.
Cassini
The answers to many remaining mysteries of Enceladus had to wait until the arrival of the Cassini spacecraft on July 1, 2004, when it entered orbit around Saturn. Given the results from the Voyager 2 images, Enceladus was considered a priority target by the Cassini mission planners, and several targeted flybys within 1,500 km of the surface were planned, as well as numerous "non-targeted" opportunities within 100,000 km of Enceladus. The flybys have yielded significant information concerning Enceladus's surface, as well as the discovery of water vapor with traces of simple hydrocarbons venting from the geologically active south polar region.
These discoveries prompted the adjustment of Cassini's flight plan to allow closer flybys of Enceladus, including an encounter in March 2008 that took it to within 48 km of the surface. Cassini's extended mission included seven close flybys of Enceladus between July 2008 and July 2010, including two passes at only 50 km in the latter half of 2008. Cassini performed a flyby on October 28, 2015, passing as close as 49 km (30 mi) and through a plume. Confirmation of molecular hydrogen (H2) would be an independent line of evidence that hydrothermal activity is taking place in the Enceladus seafloor, increasing its habitability.

Cassini has provided strong evidence that Enceladus has an ocean with an energy source, nutrients and organic molecules, making Enceladus one of the best places for the study of potentially habitable environments for extraterrestrial life.
Proposed mission concepts
The discoveries Cassini made at Enceladus have prompted studies into follow-up mission concepts, including a probe flyby (Journey to Enceladus and Titan or JET) to analyze plume contents in situ, a lander by the German Aerospace Center to study the habitability potential of its subsurface ocean (Enceladus Explorer), and two astrobiology-oriented mission concepts (the Enceladus Life Finder and Life Investigation For Enceladus (LIFE)).
The European Space Agency (ESA) was assessing concepts in 2008 to send a probe to Enceladus in a mission to be combined with studies of Titan: Titan Saturn System Mission (TSSM). TSSM was a joint NASA/ESA flagship-class proposal for exploration of Saturn's moons, with a focus on Enceladus, and it was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009, it was announced that NASA/ESA had given the EJSM mission priority ahead of TSSM, although TSSM will continue to be studied and evaluated.
In November 2017, Russian billionaire Yuri Milner expressed interest in funding a "low-cost, privately funded mission to Enceladus which can be launched relatively soon." In September 2018, NASA and the Breakthrough Initiatives, founded by Milner, signed a cooperation agreement for the mission's initial concept phase. The spacecraft would be low-cost, low mass, and would be launched at high speed on an affordable rocket. The spacecraft would be directed to perform a single flyby through Enceladus' plumes in order to sample and analyze its content for biosignatures. NASA provided scientific and technical expertise through various reviews, from March 2019 to December 2019.
In 2022, the Planetary Science Decadal Survey by the National Academy of Sciences recommended that NASA prioritize its newest probe concept, the Enceladus Orbilander, as a Flagship-class mission, alongside its newest concepts for a Mars sample-return mission and the Uranus Orbiter and Probe. The Enceladus Orbilander would be launched on a similarly affordable rocket, but would cost about $5 billion, and be designed to endure eighteen months in orbit inspecting Enceladus' plumes before landing and spending two Earth years conducting surface astrobiology research.
| Physical sciences | Solar System | null |
208612 | https://en.wikipedia.org/wiki/Rattan | Rattan | Rattan, also spelled ratan (from Malay: rotan), is the name for roughly 600 species of Old World climbing palms belonging to subfamily Calamoideae. The greatest diversity of rattan palm species and genera is in the closed-canopy old-growth tropical forests of Southeast Asia, though they can also be found in other parts of tropical Asia and Africa. Most rattan palms are ecologically considered lianas due to their climbing habits, unlike other palm species. A few species also have tree-like or shrub-like habits.
Around 20% of rattan palm species are economically important and are traditionally used in Southeast Asia in producing wickerwork furniture, baskets, canes, woven mats, cordage, and other handicrafts. Rattan canes are one of the world's most valuable non-timber forest products. Some species of rattan also have edible scaly fruit and heart of palm. Despite increasing attempts in the last 30 years at commercial cultivation, almost all rattan products still come from wild-harvested plants. Rattan supplies are now rapidly threatened due to deforestation and overexploitation. Rattans were also historically known as Manila cane or Malacca cane, based on their trade origins, as well as by numerous other trade names for individual species.
Description
Most rattan palms are classified ecologically as lianas because most mature rattan palms have a vine-like habit, scrambling through and over other vegetation. However, they are different from true woody lianas in several ways. Because rattans are palms, they do not branch and they rarely develop new root structures upon contact of the stem with soil. They are monocots, and thus, do not exhibit secondary growth. This means the diameter of the rattan stem is always constant: juvenile rattan palms have the same width as when adult, usually around 2–5 cm in diameter, with long internodes between the leaves. This also means juvenile rattan palms are rigid enough to remain free-standing, unlike true lianas which always need structural support, even when young. Many rattans also have spines which act as hooks to aid climbing over other plants, and to deter herbivores. The spines also give rattans the ability to climb wide-diameter trees, unlike other vines which use tendrils or twining which can only climb narrower supports. Rattans have been known to grow up to hundreds of metres long.
A few species of rattans are non-climbing. These range from free-standing tree-like species (like Calamus dumetosa) to acaulescent shrub-like species with short subterranean stems (like Calamus pygmaeus).
Rattans can also be solitary (single-stemmed), clustering (clump-forming), or both. Solitary rattan species grow into a single stem. Clustering rattan, on the other hand, develop clumps of up to 50 stems via suckers, similar to bamboo and bananas. These clusters can produce new stems continually as individual stems die. The impact of harvesting is much greater in solitary species, since the whole plant dies when harvested. An example of a commercially important single-stemmed species is Calamus manan. Clustering species, on the other hand, have more potential to become sustainable if the rate of harvesting does not exceed the rate of stem replacement via vegetative reproduction.
Rattans display two types of flowering: hapaxanthy and pleonanthy. All the species of the genera Korthalsia, Laccosperma, Plectocomia, Plectocomiopsis, and Myrialepis are hapaxanthic, as are a few species of Calamus. This means they flower and fruit only once and then die. All other rattan species are pleonanthic, able to flower and fruit continually. Most commercially harvested species are pleonanthic, because hapaxanthic rattans tend to have soft piths, making them unsuitable for bending.
Taxonomy
Calamoideae includes tree palms such as Raphia (raffia) and Metroxylon (sago palm) and shrub palms such as Salacca (salak) (Uhl & Dransfield 1987 Genera Palmarum). The climbing habit in palms is not restricted to Calamoideae, but has also evolved in three other evolutionary lines—tribes Cocoseae (Desmoncus, with c. 7–10 species in the New World tropics) and Areceae (Dypsis scandens in Madagascar) in subfamily Arecoideae, and tribe Hyophorbeae (climbing species of the large genus Chamaedorea in Central America) in subfamily Ceroxyloideae. These do not have spinose stems and climb by means of their reflexed terminal leaflets. Of these, only Desmoncus spp. furnish stems of sufficiently good quality to be used as rattan cane substitutes.
There are 13 different genera of rattans that include around 600 species. Some of the species in these "rattan genera" have a different habit and do not climb; they are shrubby palms of the forest undergrowth. Nevertheless, they are close relatives of species that are climbers, and they are hence included in the same genera. The largest rattan genus is Calamus, distributed across Asia, with one species represented in Africa. Of the remaining rattan genera, Korthalsia, Plectocomia, Plectocomiopsis, and Myrialepis are centered in Southeast Asia with outliers eastwards and northwards, and three are endemic to Africa: Laccosperma (syn. Ancistrophyllum), Eremospatha and Oncocalamus.
The rattan genera and their distribution (Uhl & Dransfield 1987 Genera Palmarum, Dransfield 1992):
In Uhl & Dransfield (1987 Genera Palmarum, 2nd ed. 2008), and also in Dransfield & Manokaran (1993), a great deal of basic introductory information is available.
Available rattan floras and monographs by region (2002):
Uses by taxon.
The major commercial species of rattan canes as identified for Asia by Dransfield and Manokaran (1993) and for Africa, by Tuley (1995) and Sunderland (1999) (Desmoncus not treated here):
Utilized Calamus species canes:
Other traditional uses of rattans by species:
Etymology
The name "rattan" is first attested in English in the 1650s. It is derived from the Malay name rotan. Probably ultimately from rautan (from raut, "to trim" or "to pare").
Ecology
Many rattan species also form mutualistic relationships with ant species, providing ant shelters (myrmecodomatia) such as hollow spines, funnel-shaped leaves, or leaf sheath extensions (ochreae). The rattans, in turn, gain protection from herbivores.
Conservation
Rattans are threatened with overexploitation, as harvesters are cutting stems too young and reducing their ability to resprout. Unsustainable harvesting of rattan can lead to forest degradation, affecting overall forest ecosystem services. Processing can also be polluting. The use of toxic chemicals and petrol in the processing of rattan affects soil, air and water resources, and also ultimately people's health. Meanwhile, the conventional method of rattan production is threatening the plant's long-term supply, and the income of workers.
Rattans also exhibit rapid population growth in disturbed forest edges due to higher light availability than in the closed-canopy old-growth tropical forests. Although this can mean increased rattan abundance for economic exploitation, it can also be problematic for long-term conservation efforts.
Rattan harvesting from the wild in most rattan-producing countries requires permits. These include the Philippines, Sri Lanka, India, Malaysia, Laos, Ghana, and Cameroon. In addition, the Philippines also imposes an annual allowable cut in an effort to conserve rattan resources. Rattan cultivation (both monoculture and intercropping) is also being researched and pioneered in some countries, though it is still a young industry and only constitutes a minority of the rattan resources harvested annually.
Uses
In forests where rattan grows, its economic value can play a crucial role in conservation efforts. By offering an alternative source of income, rattan harvesting can deter locals from engaging in timber logging. Harvesting rattan canes is simpler and requires less sophisticated tools compared to logging operations. Furthermore, rattan grows rapidly, which facilitates quicker replenishment compared to tropical wood species. This economic incentive supports forest maintenance by providing a profitable crop that complements rather than competes with trees. However, the long-term profitability and utility of rattan compared to other alternatives remain subjects of ongoing evaluation and study.
Cleaned rattan stems with the leaf sheaths removed are superficially similar to bamboo. Unlike bamboo, rattan stems are not hollow. Most (70%) of the world's rattan population exists in Indonesia, distributed among the islands Borneo, Sulawesi, and Sumbawa. The rest of the world's supply comes from the Philippines, Sri Lanka, Malaysia, Bangladesh and Assam, India.
Food source
Some rattan fruits are edible, with a sour taste akin to citrus. The fruit of some rattans exudes a red resin called dragon's blood; this resin was thought to have medicinal properties in antiquity and was used as a dye for violins, among other things. The resin normally results in a wood with a light peach hue.
The stem tips are rich in starch, and can be eaten raw or roasted. Long stems can be cut to obtain potable water. The palm heart can also be eaten raw or cooked.
Medicinal potential
In early 2010, scientists in Italy announced that rattan wood would be used in a new "wood to bone" process for the production of artificial bone. The process takes small pieces of rattan and places them in a furnace. Calcium and carbon are added. The wood is then further heated under intense pressure in another oven-like machine, and a phosphate solution is introduced. This process produces almost an exact replica of bone material and takes about 10 days. At the time of the announcement the bone was being tested in sheep, and there had been no signs of rejection; particles from the sheep's bodies had migrated to the "wood bone" and formed long, continuous bones. The bone-from-wood programme is being funded by the European Union. By 2023, experimental implants into humans were taking place.
Furniture
Rattans are extensively used for making baskets and furniture. When cut into sections, rattan can be used as wood to make furniture. Rattan accepts paints and stains like many other kinds of wood, so it is available in many colours, and it can be worked into many styles. Moreover, the inner core can be separated and worked into wicker. A typical braiding pattern is called Wiener Geflecht (Viennese braiding), as it was invented in 18th-century Vienna and later most prominently used by Thonet for their No. 14 chair.
Generally, raw rattan is processed into several products to be used as materials in furniture making. From a strand of rattan, the skin is usually peeled off, to be used as rattan weaving material. The remaining "core" of the rattan can be used for various purposes in furniture making. Rattan is a very good material, mainly because it is lightweight, durable, and, to a certain extent, flexible and suitable for outdoor use.
Clothing
Traditionally, the women of the Wemale ethnic group of Seram Island, Indonesia wore rattan girdles around their waist.
Corporal punishment
Thin rattan canes were the standard implement for school corporal punishment in England and Wales, and are still used for this purpose in schools in Malaysia, Singapore, and several African countries. The usual maximum number of strokes was six, traditionally referred to as getting "Six of the best". Similar canes are used for military punishments in the Singapore Armed Forces. Heavier canes, also of rattan, are used for judicial corporal punishments in Aceh, Brunei, Malaysia, and Singapore.
Wicks
Rattan is the preferred natural material used to wick essential oils in aroma reed diffusers (commonly used in aromatherapy, or merely to scent closets, passageways, and rooms), because each rattan reed contains 20 or more permeable channels that wick the oil from the container up the stem and release fragrance into the air, through an evaporation diffusion process. In contrast, reeds made from bamboo contain nodes that inhibit the passage of essential oils.
Handicraft and arts
Many of the properties of rattan that make it suitable for furniture also make it a popular choice for handicraft and art pieces. Uses include rattan baskets, plant containers, and other decorative works.
Due to its durability and resistance to splintering, sections of rattan can be used as canes, crooks for high-end umbrellas, or staves for martial arts. Rattan sticks, called baston, are used in Filipino martial arts, especially Arnis/Eskrima/Kali, and as the striking weapons in the Society for Creative Anachronism's full-contact "armoured combat".
Along with birch and bamboo, rattan is a common material used for the handles in percussion mallets, especially mallets for keyboard percussion, e.g., marimba, vibraphone, xylophone, etc.
Shelter material
In many rattan-rich countries, locals employ this sturdy plant in their home-building projects, and it is heavily used as a housing material in rural areas. The skin of the plant is primarily used for weaving.
Sports equipment
Rattan cane is also used traditionally to make polo mallets, though only a small portion of the cane harvested (roughly 3%) is strong, flexible, and durable enough to be made into sticks for polo mallets, and the popularity of rattan mallets is waning next to the more modern variant, the fibrecane.
Weaponry
Fire-hardened rattan was commonly used for the shafts of Philippine spears collectively known as sibat. They were fitted with a variety of iron spearheads and ranged from short throwing versions to heavy thrusting weapons. They were used for hunting, fishing, or warfare (both land and naval). The rattan shafts of war spears are usually elaborately ornamented with carvings and metal inlays. Arnis also makes prominent use of rattan as "arnis sticks", commonly called yantok or baston. Their durability and weight make them ideal for training in the complex execution of techniques, as well as a weapon of choice, even against bladed weapons.
Rattan shields were historically used in ancient, medieval and early modern China and Korea. According to some contemporary sources, they were reasonably effective against both arrows and early firearms.
Rattan also sees prominent use in battle re-enactments as a stand-in for potentially lethal weapons.
Rattan can also be used to build a functional sword that delivers an impact similar to that of its steel counterparts, but non-lethal.
| Biology and health sciences | Arecales (inc. Palms) | Plants |
208660 | https://en.wikipedia.org/wiki/Tropicbird | Tropicbird | Tropicbirds are a family, Phaethontidae, of tropical pelagic seabirds. They are the sole living representatives of the order Phaethontiformes. For many years they were considered part of the Pelecaniformes, but genetics indicates they are most closely related to the Eurypygiformes. There are three species in one genus, Phaethon. The scientific names are derived from Ancient Greek phaethon, "sun". They have predominantly white plumage with elongated tail feathers and small feeble legs and feet.
Taxonomy, systematics and evolution
The genus Phaethon was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The name is from Ancient Greek phaethōn meaning "sun". The type species was designated as the red-billed tropicbird (Phaethon aethereus) by George Robert Gray in 1840.
Tropicbirds were traditionally grouped in the order Pelecaniformes, which contained the pelicans, cormorants and shags, darters, gannets and boobies, and frigatebirds; in the Sibley–Ahlquist taxonomy, the Pelecaniformes were united with other groups into a large "Ciconiiformes". More recently this grouping has been found to be massively paraphyletic (excluding closer relatives of some of its members) and split again.
Microscopic analysis of eggshell structure by Konstantin Mikhailov in 1995 found that the eggshells of tropicbirds lacked the covering of thick microglobular material of other Pelecaniformes. Jarvis et al.'s 2014 paper "Whole-genome analyses resolve early branches in the tree of life of modern birds" aligns the tropicbirds most closely with the sunbittern and the kagu of the Eurypygiformes, with these two clades forming the sister group of the "core water birds", the Aequornithes; the Metaves hypothesis has been abandoned.
Family Phaethontidae Brandt 1840
Genus †Proplegadis Harrison & Walker 1971
†Proplegadis fisheri Harrison & Walker 1971
Genus †Phaethusavis Bourdon, Amaghzaz & Bouya 2008
†Phaethusavis pelagicus Bourdon, Amaghzaz & Bouya 2008
Genus †Heliadornis Olson 1985
†H. ashbyi Olson 1985
†H. minor Kessler 2009
†H. paratethydicus Mlíkovský 1997
Genus Phaethon Linnaeus, 1758
Red-billed tropicbird, P. aethereus (tropical Atlantic, eastern Pacific, and Indian oceans)
Red-tailed tropicbird, P. rubricauda (Indian Ocean and the western and central tropical Pacific)
White-tailed tropicbird, P. lepturus (widespread in tropical waters, except in the eastern Pacific)
The red-billed tropicbird is basal within the genus. The split between the red-billed tropicbird and the other two tropicbirds is hypothesized to have taken place about six million years ago, with the split between the red-tailed and white-tailed tropicbird taking place about four million years ago.
Phaethusavis and Heliadornis are prehistoric genera of tropicbirds described from fossils.
Extant species
Description
Tropicbirds range in size from 76 cm to 102 cm in length and 94 cm to 112 cm in wingspan. Their plumage is predominantly white, with elongated central tail feathers. The three species have different combinations of black markings on the face, back, and wings. Their bills are large, powerful and slightly decurved. Their heads are large and their necks are short and thick. They have totipalmate feet (that is, all four toes are connected by a web). The legs of a tropicbird are located far back on their body, making walking impossible, so that they can only move on land by pushing themselves forward with their feet.
The tropicbirds' call is typically a loud, piercing, shrill, but grating whistle or crackle. These calls are often given in a rapid series when the birds are in a display flight at the colony. In old literature they were referred to as boatswain (bo'sun/bosun) birds due to their loud whistling calls.
Behaviour and ecology
Tropicbirds frequently catch their prey by hovering and then plunge-diving, typically only into the surface-layer of the waters. They eat mostly fish, especially flying fish, and occasionally squid. Tropicbirds tend to avoid multi-species feeding flocks, unlike the frigatebirds, which have similar diets.
Tropicbirds are usually solitary or in pairs away from breeding colonies. There they engage in spectacular courtship displays. For several minutes, groups of 2–20 birds simultaneously and repeatedly fly around one another in large, vertical circles, while swinging the tail streamers from side to side. If the female likes the presentation, she will mate with the male in his prospective nest-site. Occasionally, disputes will occur between males trying to protect their mates and nesting areas.
Tropicbirds generally nest in holes or crevices on the bare ground. The female lays one white egg, spotted brown, and incubates for 40–46 days. The incubation is performed by both parents, but mostly the female, while the male brings food to feed the female. The chick hatches with grey down. It will stay alone in the nest while both parents search for food, and they will feed the chick twice every three days until fledging, about 12–13 weeks after hatching. The young are not able to fly initially; they will float on the ocean for several days to lose weight before flight.
Tropicbird chicks have slower growth than nearshore birds, and they tend to accumulate fat deposits while young. That, along with one-egg clutches, appears to be an adaptation to a pelagic lifestyle where food is often gathered in large amounts, but may be hard to find.
| Biology and health sciences | Basics | Animals |
208732 | https://en.wikipedia.org/wiki/Highly%20composite%20number | Highly composite number | A highly composite number is a positive integer that has more divisors than all smaller positive integers. If d(n) denotes the number of divisors of a positive integer n, then a positive integer N is highly composite if d(N) > d(n) for all n < N. For example, 6 is highly composite because d(6)=4 and d(n)=1,2,2,3,2 for n=1,2,3,4,5 respectively.
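To make the definition concrete, the following short Python sketch (ours, not part of the article) finds highly composite numbers by brute force, counting divisors by trial division:

def num_divisors(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i == n // i else 2  # i pairs with n // i
        i += 1
    return count

def highly_composite_up_to(limit):
    """Return every n <= limit with more divisors than all smaller numbers."""
    best, result = 0, []
    for n in range(1, limit + 1):
        dn = num_divisors(n)
        if dn > best:  # strictly more divisors than any smaller n
            best = dn
            result.append(n)
    return result

print(highly_composite_up_to(60))  # [1, 2, 4, 6, 12, 24, 36, 48, 60]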
A related concept is that of a largely composite number, a positive integer that has at least as many divisors as all smaller positive integers. The name can be somewhat misleading, as the first two highly composite numbers (1 and 2) are not actually composite numbers; however, all further terms are.
Ramanujan wrote a paper on highly composite numbers in 1915.
The mathematician Jean-Pierre Kahane suggested that Plato must have known about highly composite numbers as he deliberately chose such a number, 5040 (= 7!), as the ideal number of citizens in a city. Furthermore, Vardoulakis and Pugh's paper delves into a similar inquiry concerning the number 5040.
Examples
The first 41 highly composite numbers are listed in the table below. The number of divisors is given in the column labeled d(n). Asterisks indicate superior highly composite numbers.
The divisors of the first 19 highly composite numbers are shown below.
The table below shows all 72 divisors of 10080 by writing it as a product of two numbers in 36 different ways.
The 15,000th highly composite number can be found on Achim Flammenkamp's website. It is the product of 230 primes, where aₙ denotes the nth successive prime number and all omitted terms (a₂₂ to a₂₂₈) are factors with exponent equal to one. More concisely, it is the product of seven distinct primorials, where the primorial bₙ is the product of the first n primes.
Prime factorization
Roughly speaking, for a number to be highly composite it has to have prime factors as small as possible, but not too many of the same. By the fundamental theorem of arithmetic, every positive integer n has a unique prime factorization:

$$n = p_1^{c_1} \cdot p_2^{c_2} \cdots p_k^{c_k}$$

where $p_1 < p_2 < \cdots < p_k$ are prime and the exponents $c_i$ are positive integers.

Any factor of n must have the same or lesser multiplicity in each prime:

$$p_1^{d_1} \cdot p_2^{d_2} \cdots p_k^{d_k}, \qquad 0 \le d_i \le c_i \text{ for each } i.$$

So the number of divisors of n is:

$$d(n) = (c_1 + 1)(c_2 + 1) \cdots (c_k + 1).$$
Hence, for a highly composite number n,
the k given prime numbers $p_i$ must be precisely the first k prime numbers (2, 3, 5, ...); if not, we could replace one of the given primes by a smaller prime, and thus obtain a smaller number than n with the same number of divisors (for instance 10 = 2 × 5 may be replaced with 6 = 2 × 3; both have four divisors);
the sequence of exponents must be non-increasing, that is $c_1 \ge c_2 \ge \cdots \ge c_k$; otherwise, by exchanging two exponents we would again get a smaller number than n with the same number of divisors (for instance 18 = 2¹ × 3² may be replaced with 12 = 2² × 3¹; both have six divisors).
Also, except in two special cases n = 4 and n = 36, the last exponent $c_k$ must equal 1. It means that 1, 4, and 36 are the only square highly composite numbers. Saying that the sequence of exponents is non-increasing is equivalent to saying that a highly composite number is a product of primorials or, alternatively, the smallest number for its prime signature.
Note that although the above described conditions are necessary, they are not sufficient for a number to be highly composite. For example, 96 = 2⁵ × 3 satisfies the above conditions and has 12 divisors but is not highly composite, since there is a smaller number (60) which has the same number of divisors.
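To illustrate that the conditions are necessary but not sufficient, here is a small Python sketch (ours) that computes d(n) from the exponents of the prime factorization and checks the 96-versus-60 example:

def prime_exponents(n):
    """Exponents of the prime factorization of n, smallest prime first."""
    exps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)  # one remaining prime factor
    return exps

def d(n):
    """d(n) = (c1 + 1)(c2 + 1) ... (ck + 1)."""
    result = 1
    for c in prime_exponents(n):
        result *= c + 1
    return result

print(prime_exponents(96), d(96))  # [5, 1] 12 -- conditions hold, 12 divisors
print(prime_exponents(60), d(60))  # [2, 1, 1] 12 -- smaller, same divisor count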
Asymptotic growth and density
If Q(x) denotes the number of highly composite numbers less than or equal to x, then there are two constants a and b, both greater than 1, such that $(\ln x)^a \le Q(x) \le (\ln x)^b$.
The first part of the inequality was proved by Paul Erdős in 1944 and the second part by Jean-Louis Nicolas in 1988.
Related sequences
Highly composite numbers greater than 6 are also abundant numbers. One need only look at the three largest proper divisors of a particular highly composite number to ascertain this fact. It is false that all highly composite numbers are also Harshad numbers in base 10. The first highly composite number that is not a Harshad number is 245,044,800; it has a digit sum of 27, which does not divide evenly into 245,044,800.
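The counterexample is easy to verify directly; a one-off Python check (ours):

n = 245_044_800
digit_sum = sum(int(ch) for ch in str(n))
print(digit_sum)           # 27
print(n % digit_sum == 0)  # False: the digit sum does not divide n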
10 of the first 38 highly composite numbers are superior highly composite numbers.
The sequence of highly composite numbers is a subset of the sequence of smallest numbers k with exactly n divisors.
Highly composite numbers whose number of divisors is also a highly composite number are
1, 2, 6, 12, 60, 360, 1260, 2520, 5040, 55440, 277200, 720720, 3603600, 61261200, 2205403200, 293318625600, 6746328388800, 195643523275200.
It is extremely likely that this sequence is complete.
A positive integer n is a largely composite number if d(n) ≥ d(m) for all m ≤ n. The counting function QL(x) of largely composite numbers satisfies an analogous two-sided bound for some positive constants c and d.
Because the prime factorization of a highly composite number uses all of the first k primes, every highly composite number must be a practical number. Due to their ease of use in calculations involving fractions, many of these numbers are used in traditional systems of measurement and engineering designs.
| Mathematics | Sums and products | null |
208742 | https://en.wikipedia.org/wiki/Mimas | Mimas | Mimas, also designated Saturn I, is the seventh-largest natural satellite of Saturn. With a mean diameter of about 396 km (246 mi), Mimas is the smallest astronomical body known to be roughly rounded in shape due to its own gravity. Mimas's low density, 1.15 g/cm3, indicates that it is composed mostly of water ice with only a small amount of rock, and study of Mimas's motion suggests that it may have a liquid ocean beneath its surface ice. The surface of Mimas is heavily cratered and shows little sign of recent geological activity. A notable feature of Mimas's surface is Herschel, one of the largest craters relative to the size of the parent body in the Solar System. Herschel measures about 139 km (86 mi) across, roughly one-third of Mimas's mean diameter, and formed from an extremely energetic impact event. The crater is named after the discoverer of Mimas, William Herschel, who found the moon in 1789. The moon's presence has created one of the largest 'gaps' in Saturn's rings, the Cassini Division, as orbital resonance with Mimas destabilises the orbits of particles there.
Discovery
Mimas was discovered by the astronomer William Herschel on 17 September 1789. He recorded his discovery as follows:
The 40-foot telescope was a metal mirror reflecting telescope built by Herschel, with a 126 cm (49.5 in) aperture. The 40 feet refers to the focal length, not the aperture diameter as is more common with modern telescopes.
Name
Mimas is named after one of the Giants in Greek mythology, Mimas. The names of all seven then-known satellites of Saturn, including Mimas, were suggested by William Herschel's son John in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. Saturn (the Roman equivalent of Cronus in Greek mythology) was the leader of the Titans, the generation before the Gods, and rulers of the world for some time, while the Giants were the subsequent generation, and each group fought a great struggle against Zeus and the Olympians.
The customary English pronunciation of the name is , or sometimes .
The Greek and Latin root of the name is Mimant- (cf. Italian Mimante, Russian Мимант for the mythological figure), and so the English adjectival form is Mimantean or Mimantian, either spelling pronounced ~ .
Physical characteristics
Mimas is the smallest and innermost of Saturn's major moons. The surface area of Mimas is slightly less than the land area of Spain or California. The low density of Mimas, 1.15 g/cm3, indicates that it is composed mostly of water ice with only a small amount of rock. As a result of the tidal forces acting on it, Mimas is noticeably oblate; its longest axis is about 10% longer than the shortest. The ellipsoidal shape of Mimas is especially noticeable in some recent images from the Cassini probe. Mimas's most distinctive feature is a giant impact crater about 139 km (86 mi) across, named Herschel after the discoverer of Mimas. Herschel's diameter is almost a third of Mimas's own diameter; its walls are approximately 5 km (3 mi) high, parts of its floor measure 10 km (6 mi) deep, and its central peak rises 6 km (4 mi) above the crater floor. If there were a crater of an equivalent scale on Earth (in relative size), it would be over 4,000 km (2,500 mi) in diameter, wider than Australia. The impact that made this crater must have nearly shattered Mimas: the surface antipodal to Herschel (opposite through the globe) is highly disrupted, indicating that the shock waves created by the Herschel impact propagated through the whole moon.
The Mimantean surface is saturated with smaller impact craters, but no others are anywhere near the size of Herschel. Although Mimas is heavily cratered, the cratering is not uniform. Most of the surface is covered with craters larger than in diameter, but in the south polar region, there are generally no craters larger than in diameter.
Three types of geological features are officially recognised on Mimas: craters, chasmata (chasms), and catenae (crater chains).
By studying Mimas's movement, researchers have found that it has a water ocean beneath 20–30 km of surface ice. The ocean formed within the last 25 million years, perhaps even the last 2–3 million years, and is thought to be warmed by Saturn's tidal forces.
Orbital resonances
A number of features in Saturn's rings are related to resonances with Mimas. Mimas is responsible for clearing the material from the Cassini Division, the gap between Saturn's two widest rings, the A Ring and B Ring. Particles in the Huygens Gap at the inner edge of the Cassini Division are in a 2:1 orbital resonance with Mimas: they orbit twice for each orbit of Mimas. The repeated pulls by Mimas on the Cassini Division particles, always in the same direction in space, force them into new orbits outside the gap. The boundary between the C and B rings is in a 3:1 resonance with Mimas. Recently, the G Ring was found to be in a 7:6 co-rotation eccentricity resonance with Mimas; the ring's inner edge lies just inside Mimas's orbit.
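To see how a 2:1 mean-motion resonance picks out the Cassini Division's location, Kepler's third law (period squared proportional to semi-major axis cubed) can be applied. The following Python sketch is illustrative only and assumes a round value of about 185,540 km for Mimas's orbital radius:

# A particle completing two orbits per Mimas orbit has half Mimas's period,
# so by Kepler's third law its semi-major axis is (1/2)**(2/3) of Mimas's.
mimas_a_km = 185_540                  # assumed orbital radius of Mimas
resonance_a_km = mimas_a_km * (1 / 2) ** (2 / 3)
print(round(resonance_a_km))          # ~116,900 km, near the Huygens Gap at
                                      # the inner edge of the Cassini Division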
Mimas is also in a 2:1 mean-motion resonance with the larger moon Tethys, and in a 2:3 resonance with the outer F Ring shepherd moonlet, Pandora. A moon co-orbital with Mimas was reported by Stephen P. Synnott and Richard J. Terrile in 1982, but was never confirmed.
Anomalous libration and subsurface ocean
In 2014, researchers noted that the librational motion of Mimas has a component that cannot be explained by its orbit alone, and concluded that it was due to either an interior that is not in hydrostatic equilibrium (an elongated core) or an internal ocean. However, in 2017 it was concluded that the presence of an ocean in Mimas's interior would have led to surface tidal stresses comparable to or greater than those on tectonically active Europa. Thus, the lack of evidence for surface cracking or other tectonic activity on Mimas argues against the presence of such an ocean; as the formation of a core would have also produced an ocean and thus the nonexistent tidal stresses, that possibility is also unlikely. The presence of an asymmetric mass anomaly associated with the crater Herschel was considered to be a more likely explanation for the libration.
In 2022, scientists at the Southwest Research Institute identified a tidal heating model for Mimas that produced an internal ocean without any surface cracking or visible tidal stresses. The presence of an internal ocean concealed by a stable icy shell between 24 and 31 km in thickness was found to match the visual and librational characteristics of Mimas as observed by Cassini. Continued measurements of Mimas's surface heat flux will be needed to confirm this hypothesis.
On 7 February 2024, researchers at the Paris Observatory announced the discovery that Mimas's orbit apsidally precesses slower than predicted if it were a solid body, which further supports the existence of a subsurface ocean in Mimas. The researchers estimated the ocean to be located 20 to 30 km below the surface, consistent with previous estimates. The researchers suggest that Mimas's ocean must be very young, less than 25 million years old, to explain the lack of geological activity on Mimas's cratered surface.
Exploration
Pioneer 11 flew by Saturn in 1979, and its closest approach to Mimas was 104,263 km on 1 September 1979. Voyager 1 flew by in 1980, and Voyager 2 in 1981.
Mimas was imaged several times by the Cassini orbiter, which entered into orbit around Saturn in 2004. A close flyby occurred on 13 February 2010, when Cassini passed close by Mimas.
In popular culture
When seen from certain angles, Mimas resembles the Death Star, a fictional space station and superweapon known from the 1977 film Star Wars. Herschel resembles the concave disc of the Death Star's "superlaser". This is a coincidence, as the film was made nearly three years before Mimas was resolved well enough to see the crater.
In 2010, NASA revealed a temperature map of Mimas, using images obtained by Cassini. The warmest regions, which are along one edge of Mimas, create a shape similar to the video game character Pac-Man, with Herschel Crater assuming the role of an "edible dot" or "power pellet" known from Pac-Man gameplay.
| Physical sciences | Solar System | Astronomy |
208810 | https://en.wikipedia.org/wiki/Eddington%20luminosity | Eddington luminosity | The Eddington luminosity, also referred to as the Eddington limit, is the maximum luminosity a body (such as a star) can achieve when there is balance between the force of radiation acting outward and the gravitational force acting inward. The state of balance is called hydrostatic equilibrium. When a star exceeds the Eddington luminosity, it will initiate a very intense radiation-driven stellar wind from its outer layers. Since most massive stars have luminosities far below the Eddington luminosity, their winds are driven mostly by the less intense line absorption. The Eddington limit is invoked to explain the observed luminosities of accreting black holes such as quasars.
Originally, Sir Arthur Eddington took only the electron scattering into account when calculating this limit, something that now is called the classical Eddington limit. Nowadays, the modified Eddington limit also takes into account other radiation processes such as bound–free and free–free radiation interaction.
Derivation
The Eddington limit is obtained by setting the outward radiation pressure equal to the inward gravitational force. Both forces decrease by inverse-square laws, so once equality is reached, the hydrodynamic flow is the same throughout the star.
From Euler's equation in hydrostatic equilibrium, the mean acceleration is zero:

$$\frac{du}{dt} = -\frac{\nabla p}{\rho} - \nabla \Phi = 0,$$

where $u$ is the velocity, $p$ is the pressure, $\rho$ is the density, and $\Phi$ is the gravitational potential. If the pressure is dominated by radiation pressure associated with an irradiance $F_{\rm rad}$, then

$$-\frac{\nabla p}{\rho} = \frac{\kappa}{c} F_{\rm rad}.$$

Here $\kappa$ is the opacity of the stellar material, defined as the fraction of radiation energy flux absorbed by the medium per unit density and unit length. For ionized hydrogen, $\kappa = \sigma_{\rm T}/m_{\rm p}$, where $\sigma_{\rm T}$ is the Thomson scattering cross-section for the electron and $m_{\rm p}$ is the mass of a proton. Note that $F_{\rm rad}$ is defined as the energy flux over a surface, which can be expressed with the momentum flux using $F_{\rm rad} = c\,p_{\rm rad}$ for radiation. Therefore, the rate of momentum transfer from the radiation to the gaseous medium per unit density is $\kappa F_{\rm rad}/c$, which explains the right-hand side of the above equation.
The luminosity of a source bounded by a surface $S$ may be expressed with these relations as

$$L = \oint_S F_{\rm rad} \cdot dS = \oint_S \frac{c}{\kappa} \nabla \Phi \cdot dS.$$

Now assuming that the opacity is a constant, it can be brought outside the integral. Using Gauss's theorem and Poisson's equation gives

$$L_{\rm Edd} = \frac{c}{\kappa} \oint_S \nabla \Phi \cdot dS = \frac{4\pi G M c}{\kappa},$$

where $M$ is the mass of the central object. This result is called the Eddington luminosity. For pure ionized hydrogen,

$$L_{\rm Edd} = \frac{4\pi G M m_{\rm p} c}{\sigma_{\rm T}} \approx 1.26 \times 10^{31} \left(\frac{M}{M_\odot}\right) \mathrm{W} \approx 3.2 \times 10^{4} \left(\frac{M}{M_\odot}\right) L_\odot,$$

where $M_\odot$ is the mass of the Sun and $L_\odot$ is the luminosity of the Sun.
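A quick numerical check of this result (our sketch, using standard values for the physical constants):

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_p = 1.6726e-27     # proton mass, kg
sigma_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_sun = 1.989e30     # solar mass, kg
L_sun = 3.828e26     # solar luminosity, W

def eddington_luminosity(mass_kg):
    """L_Edd = 4 pi G M m_p c / sigma_T, for pure ionized hydrogen."""
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

L = eddington_luminosity(M_sun)
print(f"{L:.2e} W")              # ~1.26e+31 W
print(f"{L / L_sun:.2e} L_sun")  # ~3.3e+04 solar luminosities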
The maximum possible luminosity of a source in hydrostatic equilibrium is the Eddington luminosity. If the luminosity exceeds the Eddington limit, then the radiation pressure drives an outflow.
The mass of the proton appears because, in the typical environment for the outer layers of a star, the radiation pressure acts on electrons, which are driven away from the center. Because protons are negligibly pressured by the analog of Thomson scattering, due to their larger mass, the result is to create a slight charge separation and therefore a radially directed electric field, acting to lift the positive charges, which, under the conditions in stellar atmospheres, typically are free protons. When the outward electric field is sufficient to levitate the protons against gravity, both electrons and protons are expelled together.
Different limits for different materials
The derivation above for the outward light pressure assumes a hydrogen plasma. In other circumstances the pressure balance can be different from what it is for hydrogen.
In an evolved star with a pure helium atmosphere, the electric field would have to lift a helium nucleus (an alpha particle), with nearly 4 times the mass of a proton, while the radiation pressure would act on 2 free electrons. Thus twice the usual Eddington luminosity would be needed to drive off an atmosphere of pure helium.
At very high temperatures, as in the environment of a black hole or neutron star, high-energy photons can interact with nuclei, or even with other photons, to create an electron–positron plasma. In that situation the combined mass of the positive–negative charge carrier pair is approximately 918 times smaller (half of the proton-to-electron mass ratio), while the radiation pressure on the positrons doubles the effective upward force per unit mass, so the limiting luminosity needed is reduced by a factor of ≈ 918×2.
The exact value of the Eddington luminosity depends on the chemical composition of the gas layer and the spectral energy distribution of the emission. A gas with cosmological abundances of hydrogen and helium is much more transparent than gas with solar abundance ratios. Atomic line transitions can greatly increase the effects of radiation pressure, and line-driven winds exist in some bright stars (e.g., Wolf–Rayet and O-type stars).
Super-Eddington luminosities
The role of the Eddington limit in today's research lies in explaining the very high mass loss rates seen in, for example, the series of outbursts of η Carinae in 1840–1860. The regular, line-driven stellar winds can only explain mass loss rates orders of magnitude smaller than those needed to understand the η Carinae outbursts. Such rates can instead be reached with super-Eddington winds driven by broad-spectrum radiation.
Gamma-ray bursts, novae and supernovae are examples of systems exceeding their Eddington luminosity by a large factor for very short times, resulting in short and highly intensive mass loss rates. Some X-ray binaries and active galaxies are able to maintain luminosities close to the Eddington limit for very long times. For accretion-powered sources such as accreting neutron stars or cataclysmic variables (accreting white dwarfs), the limit may act to reduce or cut off the accretion flow, imposing an Eddington limit on accretion corresponding to that on luminosity. Super-Eddington accretion onto stellar-mass black holes is one possible model for ultraluminous X-ray sources (ULXSs).
For accreting black holes, not all the energy released by accretion has to appear as outgoing luminosity, since energy can be lost through the event horizon, down the hole. Such sources effectively may not conserve energy. Then the accretion efficiency, or the fraction of energy actually radiated of that theoretically available from the gravitational energy release of accreting material, enters in an essential way.
Other factors
The Eddington limit is not a strict limit on the luminosity of a stellar object. The limit does not consider several potentially important factors, and super-Eddington objects have been observed that do not seem to have the predicted high mass-loss rate. Other factors that might affect the maximum luminosity of a star include:
Porosity. A problem with steady winds driven by broad-spectrum radiation is that both the radiative flux and gravitational acceleration scale as r⁻². The ratio between these factors is constant, and in a super-Eddington star, the whole envelope would become gravitationally unbound at the same time. This is not observed. A possible solution is introducing an atmospheric porosity, where we imagine the stellar atmosphere to consist of denser regions surrounded by regions of lower-density gas. This would reduce the coupling between radiation and matter, and the full force of the radiation field would be seen only in the more homogeneous outer, lower-density layers of the atmosphere.
Turbulence. A possible destabilizing factor might be the turbulent pressure arising when energy in the convection zones builds up a field of supersonic turbulence. The importance of turbulence is being debated, however.
Photon bubbles. Another factor that might explain some stable super-Eddington objects is the photon bubble effect. Photon bubbles would develop spontaneously in radiation-dominated atmospheres when the radiation pressure exceeds the gas pressure. We can imagine a region in the stellar atmosphere with a density lower than the surroundings, but with a higher radiation pressure. Such a region would rise through the atmosphere, with radiation diffusing in from the sides, leading to an even higher radiation pressure. This effect could transport radiation more efficiently than a homogeneous atmosphere, increasing the allowed total radiation rate. Accretion discs may exhibit luminosities as high as 10–100 times the Eddington limit without experiencing instabilities.
Humphreys–Davidson limit
Observations of massive stars show a clear upper limit to their luminosity, termed the Humphreys–Davidson limit after the researchers who first wrote about it.
Only highly unstable objects are found, temporarily, at higher luminosities. Efforts to reconcile this with the theoretical Eddington limit have been largely unsuccessful.
The H–D limit for cool supergiants is placed at around 320,000 times the luminosity of the Sun.
| Physical sciences | Active galactic nucleus | Astronomy |
208996 | https://en.wikipedia.org/wiki/Visual%20Basic%20%28.NET%29 | Visual Basic (.NET) | Visual Basic (VB), originally called Visual Basic .NET (VB.NET), is a multi-paradigm, object-oriented programming language, implemented on .NET, Mono, and the .NET Framework. Microsoft launched VB.NET in 2002 as the successor to its original Visual Basic language, the last version of which was Visual Basic 6.0. Although the ".NET" portion of the name was dropped in 2005, this article uses "Visual Basic [.NET]" to refer to all Visual Basic languages released since 2002, in order to distinguish between them and the classic Visual Basic. Along with C# and F#, it is one of the three main languages targeting the .NET ecosystem. Microsoft updated its VB language strategy on 6 February 2023, stating that VB is a stable language now and Microsoft will keep maintaining it.
Microsoft's integrated development environment (IDE) for developing in Visual Basic is Visual Studio. Most Visual Studio editions are commercial; the only exceptions are Visual Studio Express and Visual Studio Community, which are freeware. In addition, the .NET Framework SDK includes a freeware command-line compiler called vbc.exe. Mono also includes a command-line VB.NET compiler.
Visual Basic is often used in conjunction with the Windows Forms GUI library to make desktop apps for Windows. Programming for Windows Forms with Visual Basic involves dragging and dropping controls on a form using a GUI designer and writing corresponding code for each control.
Use in making GUI programs
The Windows Forms library is most commonly used to create GUI interfaces in Visual Basic. All visual elements in the Windows Forms class library derive from the Control class. This provides the minimal functionality of a user interface element such as location, size, color, font, text, as well as common events like click and drag/drop. The Control class also has docking support to let a control rearrange its position under its parent.
Forms are typically designed in the Visual Studio IDE. In Visual Studio, forms are created using drag-and-drop techniques. A tool is used to place controls (e.g., text boxes, buttons, etc.) on the form (window). Controls have attributes and event handlers associated with them. Default values are provided when the control is created, but may be changed by the programmer. Many attribute values can be modified during run time based on user actions or changes in the environment, providing a dynamic application. For example, code can be inserted into the form resize event handler to reposition a control so that it remains centered on the form, expands to fill up the form, etc. By inserting code into the event handler for a keypress in a text box, the program can automatically translate the case of the text being entered, or even prevent certain characters from being inserted.
Syntax
Visual Basic uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, on a single line. As part of that evaluation, functions or subroutines may be called and variables may be assigned new values. To modify the normal sequential execution of statements, Visual Basic provides several control-flow statements identified by reserved keywords. Structured programming is supported by several constructs including two conditional execution constructs (If ... Then ... Else ... End If and Select Case ... Case ... End Select) and four iterative execution (loop) constructs (Do ... Loop, For ... To, For Each, and While ... End While). The For ... To statement has separate initialisation and testing sections, both of which must be present. (See examples below.) The For Each statement steps through each value in a list.
In addition, in Visual Basic:
There is no unified way of defining blocks of statements. Instead, certain keywords, such as "If … Then" or "Sub" are interpreted as starters of sub-blocks of code and have matching termination keywords such as "End If" or "End Sub".
Statements are terminated either with a colon (":") or with the end of line. Multiple-line statements in Visual Basic are enabled with " _" at the end of each such line. The need for the underscore continuation character was largely removed in version 10 and later versions.
The equals sign ("=") is used in both assigning values to variables and in comparison.
Round brackets (parentheses) are used with arrays, both to declare them and to get a value at a given index in one of them. Visual Basic uses round brackets to define the parameters of subroutines or functions.
A single quotation mark (') or the keyword REM, placed at the beginning of a line or after any number of space or tab characters at the beginning of a line, or after other code on a line, indicates that the (remainder of the) line is a comment.
Simple example
The following is a very simple Visual Basic program, a version of the classic "Hello, World!" example created as a console application:
Module Module1
Sub Main()
' The classic "Hello, World!" demonstration program
Console.WriteLine("Hello, World!")
End Sub
End Module
It prints "Hello, World!" on a command-line window. Each line serves a specific purpose, as follows:
Module Module1
This is a module definition. Modules are a division of code, which can contain any kind of object, like constants or variables, functions or methods, or classes, but can not be instantiated as objects like classes and cannot inherit from other modules. Modules serve as containers of code that can be referenced from other parts of a program. It is common practice for a module and the code file which contains it to have the same name. However, this is not required, as a single code file may contain more than one module or class.
Sub Main()
This line defines a subroutine called "Main". "Main" is the entry point, where the program begins execution.
Console.WriteLine("Hello, world!")
This line performs the actual task of writing the output. Console is a system object, representing a command-line interface (also known as a "console") and granting programmatic access to the operating system's standard streams. The program calls the Console method WriteLine, which causes the string passed to it to be displayed on the console.
Instead of Console.WriteLine, one could use MsgBox, which prints the message in a dialog box instead of a command-line window.
Complex example
This piece of code outputs Floyd's Triangle to the console:
Imports System.Console
Module Program
Sub Main()
Dim rows As Integer
' Input validation.
Do Until Integer.TryParse(ReadLine("Enter a value for how many rows to be displayed: " & vbCrLf), rows) AndAlso rows >= 1
WriteLine("Allowed range is 1 to {0}", Integer.MaxValue)
Loop
' Output of Floyd's Triangle
Dim current As Integer = 1
Dim row As Integer
Dim column As Integer
For row = 1 To rows
For column = 1 To row
Write("{0,-2} ", current)
current += 1
Next
WriteLine()
Next
End Sub
''' <summary>
''' Like Console.ReadLine but takes a prompt string.
''' </summary>
Function ReadLine(Optional prompt As String = Nothing) As String
If prompt IsNot Nothing Then
Write(prompt)
End If
Return Console.ReadLine()
End Function
End Module
Comparison with the classic Visual Basic
Whether Visual Basic .NET should be considered as just another version of Visual Basic or a completely different language is a topic of debate. There are new additions to support new features, such as structured exception handling and short-circuited expressions. Also, two important data-type changes occurred with the move to VB.NET: compared to Visual Basic 6, the Integer data type has been doubled in length from 16 bits to 32 bits, and the Long data type has been doubled in length from 32 bits to 64 bits. This is true for all versions of VB.NET. A 16-bit integer in all versions of VB.NET is now known as a Short. Similarly, the Windows Forms editor is very similar in style and function to the Visual Basic form editor.
The things that have changed significantly are the semantics—from those of an object-based programming language running on a deterministic, reference-counted engine based on COM to a fully object-oriented language backed by the .NET Framework, which consists of a combination of the Common Language Runtime (a virtual machine using generational garbage collection and a just-in-time compilation engine) and a far larger class library. The increased breadth of the latter is also a problem that VB developers have to deal with when coming to the language, although this is somewhat addressed by the My feature in Visual Studio 2005.
The changes have altered many underlying assumptions about the "right" thing to do with respect to performance and maintainability. Some functions and libraries no longer exist; others are available, but not as efficient as the "native" .NET alternatives. Even if they compile, most converted Visual Basic 6 applications will require some level of refactoring to take full advantage of the new language. Documentation is available to cover changes in the syntax, debugging applications, deployment and terminology.
Comparative examples
The following simple examples compare VB and VB.NET syntax. They assume that the developer has created a form, placed a button on it and has associated the subroutines demonstrated in each example with the click event handler of the mentioned button. Each example creates a "Hello, World" message box after the button on the form is clicked.
Visual Basic 6:
Private Sub Command1_Click()
MsgBox "Hello, World"
End Sub
VB.NET (MsgBox or MessageBox class can be used):
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
MsgBox("Hello, World")
End Sub
Both Visual Basic 6 and Visual Basic .NET automatically generate the Sub and End Sub statements when the corresponding button is double-clicked in design view. Visual Basic .NET will also generate the necessary Class and End Class statements. The developer need only add the statement to display the "Hello, World" message box.
All procedure calls must be made with parentheses in VB.NET, whereas in Visual Basic 6 there were different conventions for functions (parentheses required) and subs (no parentheses allowed, unless called using the keyword Call).
The names Command1 and Button1 are not obligatory. However, these are default names for a command button in Visual Basic 6 and VB.NET respectively.
In VB.NET, the Handles keyword is used to make the sub Button1_Click a handler for the Click event of the object Button1. In Visual Basic 6, event handler subs must have a specific name consisting of the object's name (Command1), an underscore (_), and the event's name (Click), hence Command1_Click.
There is a function called MessageBox.Show in the Microsoft.VisualBasic namespace which can be used (instead of MsgBox) similarly to the corresponding function in Visual Basic 6. There is a controversy about which function to use as a best practice (not only restricted to showing message boxes but also regarding other features of the Microsoft.VisualBasic namespace). Some programmers prefer to do things "the .NET way", since the Framework classes have more features and are less language-specific. Others argue that using language-specific features makes code more readable (for example, using int (C#) or Integer (VB.NET) instead of System.Int32).
In Visual Basic 2008, the inclusion of the event handler parameters (sender As Object, e As EventArgs) has become optional.
The following example demonstrates a difference between Visual Basic 6 and VB.NET. Both examples close the active window.
Visual Basic 6:
Sub cmdClose_Click()
Unload Me
End Sub
VB.NET:
Sub btnClose_Click(sender As Object, e As EventArgs) Handles btnClose.Click
Close()
End Sub
The 'cmd' prefix is replaced by the 'btn' prefix, conforming to the new convention previously mentioned.
Visual Basic 6 did not provide common operator shortcuts. The following are equivalent:
Visual Basic 6:
Sub Timer1_Timer()
'Reduces Form Height by one pixel per tick
Me.Height = Me.Height - 1
End Sub
VB.NET:
Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
Me.Height -= 1
End Sub
Comparison with C#
C# and Visual Basic are Microsoft's first languages made to program on the .NET Framework (later adding F# and more; others have also added languages). Though C# and Visual Basic are syntactically different, that is where the differences mostly end. Microsoft developed both of these languages to be part of the same .NET Framework development platform. They are both developed, managed, and supported by the same language development team at Microsoft. They compile to the same intermediate language (IL), which runs against the same .NET Framework runtime libraries. Although there are some differences in the programming constructs, their differences are primarily syntactic and, assuming one avoids the Visual Basic "Compatibility" libraries provided by Microsoft to aid conversion from Visual Basic 6, almost every feature in VB has an equivalent feature in C# and vice versa. Lastly, both languages reference the same Base Classes of the .NET Framework to extend their functionality. As a result, with few exceptions, a program written in either language can be run through a simple syntax converter to translate to the other. There are many open source and commercially available products for this task.
Examples
Hello World!
Windows Forms Application
Requires a button called Button1.
Public Class Form1
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
MsgBox("Hello world!", MsgBoxStyle.Information, "Hello world!") ' Show a message that says "Hello world!".
End Sub
End Class
Console Application
Module Module1
Sub Main()
Console.WriteLine("Hello world!") ' Write in the console "Hello world!" and start a new line.
Console.ReadKey() ' The user must press any key before the application ends.
End Sub
End Module
Speaking
Windows Forms Application
Requires a TextBox titled 'TextBox1' and a button called Button1.
Public Class Form1
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
CreateObject("Sapi.Spvoice").Speak(TextBox1.Text)
End Sub
End Class
Console Application
Module Module1
Private Voice = CreateObject("Sapi.Spvoice")
Private Text As String
Sub Main()
Console.Write("Enter the text to speak: ") ' Say "Enter the text to speak: "
Text = Console.ReadLine() ' The user must enter the text to speak.
Voice.Speak(Text) ' Speak the text the user has entered.
End Sub
End Module
Version history
Succeeding the classic Visual Basic version 6.0, the first version of Visual Basic .NET debuted in 2002. To date, ten versions of Visual Basic .NET have been released.
2002 (VB 7.0)
The first version, Visual Basic .NET, relies on .NET Framework 1.0. The most important feature is managed code, which contrasts with the classic Visual Basic.
2003 (VB 7.1)
Visual Basic .NET 2003 was released with .NET Framework 1.1. New features included support for the .NET Compact Framework and a better VB upgrade wizard. Improvements were also made to the performance and reliability of .NET IDE (particularly the background compiler) and runtime. In addition, Visual Basic .NET 2003 was available in the Visual Studio.NET Academic Edition, distributed to a certain number of scholars from each country without cost.
2005 (VB 8.0)
After Visual Basic .NET 2003, Microsoft dropped ".NET" from the name of the product, calling the next version Visual Basic 2005.
For this release, Microsoft added many features intended to reinforce Visual Basic .NET's focus as a rapid application development platform and further differentiate it from C#, including:
Edit and Continue feature
Design-time expression evaluation
A pseudo-namespace called "My", which provides:
Easy access to certain areas of the .NET Framework that otherwise require significant code to access like using My.Form2.Text = " MainForm " rather than System.WindowsApplication1.Forms.Form2.text = " MainForm "
Dynamically generated classes (e.g. My.Forms)
Improved VB-to-VB.NET converter
A "using" keyword, simplifying the use of objects that require the Dispose pattern to free resources
Just My Code feature, which hides (steps over) boilerplate code written by the Visual Studio .NET IDE and system library code during debugging
Data Source binding, easing database client/server development
To bridge the gaps between itself and other .NET languages, this version added:
Generics
Partial classes, a method of defining some parts of a class in one file and then adding more definitions later; particularly useful for integrating user code with auto-generated code
Operator overloading and nullable types
Support for unsigned integer data types commonly used in other languages
Visual Basic 2005 introduced the IsNot operator that makes 'If X IsNot Y' equivalent to 'If Not X Is Y'. It gained notoriety when it was found to be the subject of a Microsoft patent application.
2008 (VB 9.0)
Visual Basic 9.0 was released along with .NET Framework 3.5 on November 19, 2007.
For this release, Microsoft added many features, including the following (several are combined in a sketch after this list):
A true conditional operator, "If(condition As Boolean, truepart, falsepart)", to replace the "IIf" function.
Anonymous types
Support for LINQ
Lambda expressions
XML Literals
Type Inference
Extension methods
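A minimal sketch, not taken from the original article, combining several of these features: type inference, a LINQ query, a lambda expression, and the short-circuiting "If" operator.
Imports System.Linq

Module Vb9Demo
    Sub Main()
        Dim numbers = New Integer() {1, 2, 3, 4, 5} ' Type inference: numbers is Integer().
        Dim evens = From n In numbers Where n Mod 2 = 0 Select n ' LINQ query syntax.
        Dim square = Function(n As Integer) n * n ' Single-line lambda expression.
        For Each n In evens
            ' Unlike the legacy IIf function, If(...) evaluates only the branch it returns.
            Console.WriteLine("{0}: {1}", If(n > 2, "big", "small"), square(n))
        Next
    End Sub
End Module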
2010 (VB 10.0)
In April 2010, Microsoft released Visual Basic 2010. Microsoft had planned to use the Dynamic Language Runtime (DLR) for that release but shifted to a co-evolution strategy between Visual Basic and its sister language C# to bring both languages into closer parity with one another. Visual Basic's innate ability to interact dynamically with CLR and COM objects was enhanced to work with dynamic languages built on the DLR such as IronPython and IronRuby. The Visual Basic compiler was improved to infer line continuation in a set of common contexts, in many cases removing the need for the " _" line-continuation sequence. Also, existing support for inline Functions was complemented with support for inline Subs, as well as multi-line versions of both Sub and Function lambdas.
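A minimal sketch, not taken from the original article, of implicit line continuation and a multi-line Sub lambda:
Module Vb10Demo
    Sub Main()
        ' The assignment continues onto the next line with no trailing " _".
        Dim greet =
            Sub(name As String) ' A multi-line Sub lambda.
                Console.WriteLine("Hello, " & name & "!")
            End Sub
        greet("world")
    End Sub
End Module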
2012 (VB 11.0)
Visual Basic 2012 was released alongside .NET Framework 4.5. Major features introduced in this version include the following (the first two are combined in a sketch after this list):
Asynchronous programming with the "Async" and "Await" keywords
Iterators
Call hierarchy
Caller information
"Global" keyword in "namespace" statements
2013 (VB 12.0)
Visual Basic 2013 was released alongside .NET Framework 4.5.1 with Visual Studio 2013. It can also build .NET Framework 4.5.2 applications if the corresponding Developer Pack is installed.
2015 (VB 14.0)
Visual Basic 2015 (code-named VB "14.0") was released with Visual Studio 2015. Language features include a new "?." operator for inline null checks and a new string interpolation feature for formatting strings inline; both are combined in the sketch below.
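A minimal sketch, not taken from the original article:
Module Vb14Demo
    Sub Main()
        Dim name As String = Nothing
        Dim length As Integer? = name?.Length ' "?." returns Nothing instead of throwing when name is Nothing.
        Console.WriteLine($"Length is {If(length, 0)}") ' Interpolated string; If(a, b) supplies a default.
    End Sub
End Module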
2017 (VB 15.x)
Visual Basic 2017 (code named VB "15.0") was released with Visual Studio 2017.
Support for the new Visual Basic 15 language features was extended across revisions 15.0, 15.3, 15.5, and 15.8. This release also introduced new refactorings that allow organizing source code with a single action.
2019 (VB 16.0)
Visual Basic 2019 (code named VB "16.0") was released with Visual Studio 2019. It is the first version of Visual Basic focused on .NET Core.
Cross-platform and open-source development
The official Visual Basic compiler is written in Visual Basic and is available on GitHub as a part of the .NET Compiler Platform. The creation of open-source tools for Visual Basic development has been slow compared to C#, although the Mono development platform provides an implementation of Visual Basic-specific libraries and a Visual Basic 2005 compatible compiler written in Visual Basic, as well as standard framework libraries such as the Windows Forms GUI library.
MonoDevelop was an open-source alternative IDE. The Gambas environment is similar to but distinct from Visual Basic, as is the Visual FB Editor for FreeBasic.
| Technology | Programming languages | null |
209005 | https://en.wikipedia.org/wiki/Noon | Noon | Noon (also known as noontime or midday) is 12 o'clock in the daytime. It is written as 12 noon, 12:00 m. (for meridiem, literally 12:00 midday), 12 p.m. (for post meridiem, literally "after midday"), 12 pm, 12:00 (using a 24-hour clock), or 1200 (military time).
Solar noon is the time when the Sun appears to contact the local celestial meridian. This is when the Sun reaches its apparent highest point in the sky, at 12 noon apparent solar time and can be observed using a sundial. The local or clock time of solar noon depends on the date, longitude, and time zone, with Daylight Saving Time tending to place solar noon closer to 1:00pm.
Etymology
The word noon is derived from Latin nona hora, the ninth canonical hour of the day, in reference to the Western Christian liturgical term Nones (number nine), one of the seven fixed prayer times in traditional Christian denominations. The Roman and Western European medieval monastic day began at 6:00 a.m. (06:00) at the equinox by modern timekeeping, so the ninth hour started at what is now 3:00 p.m. (15:00) at the equinox. In English, the meaning of the word shifted to midday and the time gradually moved back to 12:00 local time (that is, not taking into account the modern invention of time zones). The change began in the 12th century and was fixed by the 14th century.
Solar noon
Solar noon, also known as the local apparent solar noon and Sun transit time (informally high noon), is the moment when the Sun contacts the observer's meridian (culmination or meridian transit), reaching its highest position above the horizon on that day and casting the shortest shadow. This is also the origin of the terms ante meridiem (a.m.) and post meridiem (p.m.), as noted below. The Sun is directly overhead at solar noon at the Equator on the equinoxes, at the Tropic of Cancer (latitude 23°26′ N) on the June solstice and at the Tropic of Capricorn (23°26′ S) on the December solstice. In the Northern Hemisphere, north of the Tropic of Cancer, the Sun is due south of the observer at solar noon; in the Southern Hemisphere, south of the Tropic of Capricorn, it is due north.
When the Sun contacts the observer's meridian at the observer's zenith, it is perceived to be directly overhead and no shadows are cast. This occurs at Earth's subsolar point, a point which moves around the tropics throughout the year.
The elapsed time from the local solar noon of one day to the next is exactly 24 hours on only four instances in any given year. This occurs when the effects of the obliquity of the ecliptic and of Earth's varying orbital speed around the Sun offset each other. These four days for the current epoch are centered on 11 February, 13 May, 26 July, and 3 November. It occurs at only one particular line of longitude in each instance. This line varies year to year, since Earth's true year is not an integer number of days. This event time and location also varies due to Earth's orbit being gravitationally perturbed by the planets. These four 24-hour days occur in both hemispheres simultaneously. The precise Coordinated Universal Times for these four days also mark when the opposite line of longitude, 180° away, experiences precisely 24 hours from local midnight to local midnight the next day. Thus, four varying great circles of longitude define from year to year when a 24-hour day (noon to noon or midnight to midnight) occurs.
The two longest time spans from noon to noon occur twice each year, around 20 June (24 hours plus 13 seconds) and 21 December (24 hours plus 30 seconds). The shortest time spans occur twice each year, around 25 March (24 hours minus 18 seconds) and 13 September (24 hours minus 22 seconds).
For the same reasons, solar noon and "clock noon" are usually not the same. The equation of time shows that the reading of a clock at solar noon will be higher or lower than 12:00 by as much as 16 minutes. Additionally, due to the political nature of time zones, as well as the application of daylight saving time, it can be off by more than an hour.
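The size of this offset can be estimated with a standard approximation of the equation of time; the sketch below is illustrative only, and the particular trigonometric fit used is an assumption, not something given in this article.
Module EquationOfTimeDemo
    ' Approximate equation of time, in minutes, for a day of the year (1-365).
    ' Positive values mean solar time (the sundial) is ahead of clock time.
    Function EquationOfTimeMinutes(dayOfYear As Integer) As Double
        Dim b As Double = 2 * Math.PI * (dayOfYear - 81) / 364.0
        Return 9.87 * Math.Sin(2 * b) - 7.53 * Math.Cos(b) - 1.5 * Math.Sin(b)
    End Function

    Sub Main()
        ' Around 3 November (day 307) the sundial runs about 16 minutes fast.
        Console.WriteLine(EquationOfTimeMinutes(307).ToString("F1"))
    End Sub
End Module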
Nomenclature
In the US, noon is commonly indicated by 12 p.m., and midnight by 12 a.m. While some argue that such usage is "improper" based on the Latin meaning (a.m. stands for ante meridiem and p.m. for post meridiem, meaning "before midday" and "after midday" respectively), digital clocks are unable to display anything else, and an arbitrary decision must be made. An earlier standard of indicating noon as "12M" or "12m" (for "meridies"), which was specified in the U.S. GPO Government Style Manual, has fallen into relative obscurity; the current edition of that manual makes no mention of it. However, due to the lack of an international standard, the use of "12 a.m." and "12 p.m." can be confusing. Common alternative methods of representing these times are:
to use a 24-hour clock (00:00 for midnight and 12:00 for noon, with 24:00 sometimes marking the end of a day; but never 24:01)
to use "12 noon" or "12 midnight" (though "12 midnight" may still present ambiguity regarding the specific date)
to specify midnight as between two successive days or dates (as in "midnight Saturday/Sunday" or "midnight December 14/15")
to avoid those specific times and to use "11:59 p.m." or "12:01 a.m." instead. (This is common in the travel industry to avoid confusion to passengers' schedules, especially train and plane schedules.)
| Physical sciences | Celestial mechanics | Astronomy |
209128 | https://en.wikipedia.org/wiki/Quotient%20rule | Quotient rule | In calculus, the quotient rule is a method of finding the derivative of a function that is the ratio of two differentiable functions. Let \(f(x) = \frac{g(x)}{h(x)}\), where both \(g\) and \(h\) are differentiable and \(h(x) \neq 0\). The quotient rule states that the derivative of \(f(x)\) is \[f'(x) = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}.\]
It is provable in many ways by using other derivative rules.
Examples
Example 1: Basic example
Given \(f(x) = \frac{e^x}{x^2}\), let \(g(x) = e^x\) and \(h(x) = x^2\); then using the quotient rule: \[f'(x) = \frac{e^x \cdot x^2 - e^x \cdot 2x}{(x^2)^2} = \frac{e^x(x - 2)}{x^3}.\]
Example 2: Derivative of tangent function
The quotient rule can be used to find the derivative of \(\tan x = \frac{\sin x}{\cos x}\) as follows: \[\frac{d}{dx}\tan x = \frac{(\cos x)(\cos x) - (\sin x)(-\sin x)}{\cos^2 x} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x} = \frac{1}{\cos^2 x} = \sec^2 x.\]
Reciprocal rule
The reciprocal rule is a special case of the quotient rule in which the numerator \(g(x) = 1\). Applying the quotient rule gives \[f'(x) = \frac{d}{dx}\left[\frac{1}{h(x)}\right] = \frac{0 \cdot h(x) - 1 \cdot h'(x)}{[h(x)]^2} = -\frac{h'(x)}{[h(x)]^2}.\]
Utilizing the chain rule yields the same result.
Proofs
Proof from derivative definition and limit properties
Let \(f(x) = \frac{g(x)}{h(x)}\). Applying the definition of the derivative and properties of limits gives the following proof, with the term \(g(x)h(x)\) added and subtracted to allow splitting and factoring in subsequent steps without affecting the value: \[\begin{aligned} f'(x) &= \lim_{k\to 0} \frac{f(x+k) - f(x)}{k} = \lim_{k\to 0} \frac{\frac{g(x+k)}{h(x+k)} - \frac{g(x)}{h(x)}}{k} \\ &= \lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x+k)}{k \cdot h(x)h(x+k)} \\ &= \lim_{k\to 0} \frac{g(x+k)h(x) - g(x)h(x) + g(x)h(x) - g(x)h(x+k)}{k \cdot h(x)h(x+k)} \\ &= \lim_{k\to 0} \frac{1}{h(x)h(x+k)} \left[ h(x)\,\frac{g(x+k) - g(x)}{k} - g(x)\,\frac{h(x+k) - h(x)}{k} \right] \\ &= \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}. \end{aligned}\] The limit evaluation is justified by the differentiability of \(h\), implying continuity, which can be expressed as \(\lim_{k\to 0} h(x+k) = h(x)\).
Proof using implicit differentiation
Let \(f(x) = \frac{g(x)}{h(x)}\), so that \(g(x) = f(x)h(x)\).
The product rule then gives \(g'(x) = f'(x)h(x) + f(x)h'(x)\).
Solving for \(f'(x)\) and substituting \(\frac{g(x)}{h(x)}\) back for \(f(x)\) gives: \[f'(x) = \frac{g'(x) - f(x)h'(x)}{h(x)} = \frac{g'(x) - \frac{g(x)}{h(x)}\,h'(x)}{h(x)} = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}.\]
Proof using the reciprocal rule or chain rule
Let \(f(x) = \frac{g(x)}{h(x)} = g(x) \cdot \frac{1}{h(x)}\).
Then the product rule gives \[f'(x) = g'(x) \cdot \frac{1}{h(x)} + g(x) \cdot \frac{d}{dx}\left[\frac{1}{h(x)}\right].\]
To evaluate the derivative in the second term, apply the reciprocal rule, or the power rule along with the chain rule: \[\frac{d}{dx}\left[\frac{1}{h(x)}\right] = -\frac{1}{[h(x)]^2} \cdot h'(x) = \frac{-h'(x)}{[h(x)]^2}.\]
Substituting the result into the expression gives \[f'(x) = \frac{g'(x)}{h(x)} + g(x) \cdot \frac{-h'(x)}{[h(x)]^2} = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}.\]
Proof by logarithmic differentiation
Let \(f(x) = \frac{g(x)}{h(x)}\). Taking the absolute value and the natural logarithm of both sides of the equation gives \[\ln|f(x)| = \ln\left|\frac{g(x)}{h(x)}\right|.\]
Applying properties of the absolute value and logarithms, \[\ln|f(x)| = \ln|g(x)| - \ln|h(x)|.\]
Taking the logarithmic derivative of both sides, \[\frac{f'(x)}{f(x)} = \frac{g'(x)}{g(x)} - \frac{h'(x)}{h(x)}.\]
Solving for \(f'(x)\) and substituting \(\frac{g(x)}{h(x)}\) back for \(f(x)\) gives: \[f'(x) = \frac{g(x)}{h(x)}\left[\frac{g'(x)}{g(x)} - \frac{h'(x)}{h(x)}\right] = \frac{g'(x)}{h(x)} - \frac{g(x)h'(x)}{[h(x)]^2} = \frac{g'(x)h(x) - g(x)h'(x)}{[h(x)]^2}.\]
Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because \(\frac{d}{dx}\ln|x| = \frac{1}{x}\), which justifies taking the absolute value of the functions for logarithmic differentiation.
Higher order derivatives
Implicit differentiation can be used to compute the \(n\)th derivative of a quotient (partially in terms of its first \(n - 1\) derivatives). For example, differentiating \(f(x)h(x) = g(x)\) twice (resulting in \(f''(x)h(x) + 2f'(x)h'(x) + f(x)h''(x) = g''(x)\)) and then solving for \(f''(x)\) yields \[f''(x) = \frac{g''(x) - 2f'(x)h'(x) - f(x)h''(x)}{h(x)}.\]
| Mathematics | Differential calculus | null |
209135 | https://en.wikipedia.org/wiki/River%20kingfisher | River kingfisher | The river kingfishers or pygmy kingfishers, subfamily Alcedininae, are one of the three subfamilies of kingfishers. The river kingfishers are widespread through Africa and east and south Asia as far as Australia, with one species, the common kingfisher (Alcedo atthis) also appearing in Europe and northern Asia. This group includes many kingfishers that actually dive for fish. The origin of the subfamily is thought to have been in Asia.
These are brightly plumaged, compact birds with short tails, large heads, and long bills. They feed on insects or fish, and lay white eggs in a self-excavated burrow. Both adults incubate the eggs and feed the chicks.
Taxonomy
A molecular phylogenetic study of the river kingfishers published in 2007 found that the genera as then defined did not form monophyletic groups. The species were subsequently rearranged into four monophyletic genera. A clade containing four species was placed in the resurrected genus Corythornis, and five species (little kingfisher, azure kingfisher, Bismarck kingfisher, silvery kingfisher and indigo-banded kingfisher) were moved from Alcedo to Ceyx.
All except one of the kingfishers in the reconstituted Ceyx have three rather than the usual four toes. The exception is the Sulawesi dwarf kingfisher which retains a vestigial fourth toe.
The subfamily includes 35 species divided into four genera. The African dwarf kingfisher is sometimes placed in the monotypic genus Myioceyx, and sometimes with the pygmy kingfishers in Ispidina. Molecular analysis suggests that the Madagascar pygmy kingfisher is most closely related to the malachite kingfisher.
Description
All kingfishers are short-tailed, large-headed, compact birds with long, pointed bills. Like other Coraciiformes, they are brightly coloured. Alcedo species typically have metallic blue upperparts and head, and orange or white underparts. The sexes may be identical, as with the Bismarck kingfisher, but most species show some sexual dimorphism, ranging from a different bill colour, as with the common kingfisher, to a completely different appearance. The male blue-banded kingfisher has white underparts with a blue breast band, whereas the female has orange underparts.
The small kingfishers that make up the rest of the family have blue or orange upperparts and white or buff underparts, and show little sexual variation. Across the family, the bill colour is linked to diet. The insectivorous species have red bills, and the fish-eaters have black bills.
When perched, kingfishers sit quite upright, and the flight is fast and direct. The call is typically a simple high-pitched squeak, often given in flight.
Distribution and habitat
Most alcedinids are found in the warm climates of Africa and southern and southeast Asia. Three species reach Australia, but only the common kingfisher is found across most of Europe and temperate Asia. No members of this family are found in the Americas, although the American green kingfishers are believed to have derived from alcedinid stock. The origin of the family is thought to have been in southern Asia, which still has the most species.
The Ceyx and Ispidina species are mainly birds of wet rainforest or other woodland, and are not necessarily associated with water. The Alcedo kingfishers are usually closely associated with fresh water, often in open habitats although some are primarily forest birds.
Behaviour
Breeding
River kingfishers are monogamous and territorial. The pair excavates a burrow in an earth bank and lays two or more white eggs onto the bare surface. Both parents incubate the eggs and feed the chicks. Egg laying is staggered at one-day intervals so that if food is short only the older larger nestlings get fed. The chicks are naked, blind and helpless when they hatch, and stand on their heels, unlike any adult bird.
Feeding
The small Ceyx and Ispidina species feed mainly on insects and spiders, but also take tadpoles, frogs and mayfly nymphs from puddles. They will flycatch, and their red bills are flattened to assist in the capture of insects. The Alcedo kingfishers are typically fish-eaters with black bills, but will also take aquatic invertebrates, spiders and lizards. A few species are mainly insectivorous and have red bills. Typically fish are caught by diving into the water from a perch, although the kingfisher might hover briefly.
| Biology and health sciences | Coraciiformes | null |
209148 | https://en.wikipedia.org/wiki/Water%20kingfisher | Water kingfisher | The water kingfishers or Cerylinae are one of the three subfamilies of kingfishers, and are also known as the cerylid kingfishers. All six American species are in this subfamily.
These are all specialist fish-eating species, unlike many representatives of the other two subfamilies, and it is likely that they are all descended from fish-eating kingfishers which founded populations in the New World. It was believed that the entire group evolved in the Americas, but this seems not to be true. The original ancestor possibly evolved in Africa – at any rate in the Old World – and the Chloroceryle species are the youngest ones.
Phylogeny
Evidence from molecular phylogenetic studies suggests that the Cerylinae originated in Asia and have colonised the New World on two occasions: the first time was around 8 million years ago by the Chloroceryle and the second time was around 1.9 million years ago by the common ancestor of the ringed kingfisher and the belted kingfisher in the genus Megaceryle.
The subfamily Cerylinae contains nine kingfisher species and is divided into three genera: Megaceryle, Ceryle, and Chloroceryle.
| Biology and health sciences | Coraciiformes | Animals |
209236 | https://en.wikipedia.org/wiki/Pest%20%28organism%29 | Pest (organism) | A pest is any organism harmful to humans or human concerns. The term is particularly used for creatures that damage crops, livestock, and forestry or cause a nuisance to people, especially in their homes. Humans have modified the environment for their own purposes and are intolerant of other creatures occupying the same space when their activities impact adversely on human objectives. Thus, an elephant is unobjectionable in its natural habitat but a pest when it tramples crops.
Some animals are disliked because they bite or sting; wolves, snakes, wasps, ants, bed bugs, fleas and ticks belong in this category. Others enter the home; these include houseflies, which land on and contaminate food; beetles, which tunnel into the woodwork; and other animals that scuttle about on the floor at night, like rats and cockroaches, which are often associated with unsanitary conditions.
Agricultural and horticultural crops are attacked by a wide variety of pests, the most important being rodents, insects, mites, nematodes and gastropod molluscs. The damage they do results both from the direct injury they cause to the plants and from the indirect consequences of the fungal, bacterial or viral infections they transmit. Plants have their own defences against these attacks but these may be overwhelmed, especially in habitats where the plants are already stressed, or where the pests have been accidentally introduced and may have no natural enemies. The pests affecting trees are predominantly insects, and many of these have also been introduced inadvertently and lack natural enemies, and some have transmitted novel fungal diseases with devastating results.
Humans have traditionally performed pest control in agriculture and forestry by the use of pesticides; however, other methods exist such as mechanical control, and recently developed biological controls.
Concept
A pest is any living thing which humans consider troublesome to themselves, their possessions, or the environment. Pests can cause issues with crops, human or animal health, buildings, and wild areas or larger landscapes. An older usage of the word "pest" is of a deadly epidemic disease, specifically plague. In its broadest sense, a pest is a competitor to humanity. Pests include plants, pathogens, invertebrates, vertebrates, or any organism that harms an ecosystem.
Animals
Animals are considered pests or vermin when they injure people or damage crops, forestry, or buildings. Elephants are regarded as pests by the farmers whose crops they raid and trample. Mosquitoes and ticks are vectors that can transmit ailments but are also pests because of the distress caused by their bites. Grasshoppers are usually solitary herbivores of little economic importance until the conditions are met for them to enter a swarming phase, become locusts and cause enormous damage. Many people appreciate birds in the countryside and their gardens, but when these accumulate in large masses, they can be a nuisance. Flocks of starlings can consist of hundreds of thousands of individual birds, their roosts can be noisy and their droppings voluminous; the droppings are acidic and can cause corrosion of metals, stonework, and brickwork as well as being unsightly. Pigeons in urban settings may be a health hazard, and gulls near the coast can become a nuisance, especially if they become bold enough to snatch food from passers-by. All birds are a risk at airfields where they can be sucked into aircraft engines. Woodpeckers sometimes excavate holes in buildings, fencing and utility poles, causing structural damage; they also drum on various reverberatory structures on buildings such as gutters, down-spouts, chimneys, vents and aluminium sheeting. Jellyfish can form vast swarms which may be responsible for damage to fishing gear, and sometimes clog the cooling systems of power and desalination plants which draw their water from the sea.
Many of the animals that we regard as pests live in our homes. Before humans built dwellings, these creatures lived in the wider environment, but co-evolved with humans, adapting to the warm, sheltered conditions that a house provides, the wooden timbers, the furnishings, the food supplies and the rubbish dumps. Many no longer exist as free-living organisms in the outside world, and can therefore be considered to be domesticated. The St Kilda house mouse rapidly became extinct when the last islander left the island of St Kilda, Scotland in 1930, but the St Kilda field mouse survived.
Plants
Plants may be considered pests, for example, if they are invasive species or weeds. There is no universal definition of what makes a plant a pest. Some governments, such as that of Western Australia, permit their authorities to prescribe as a pest plant "any plant that, in the local government authority's opinion, is likely to adversely affect the environment of the district, the value of property in the district, or the health, comfort or convenience of the district's inhabitants." An example of such a plant prescribed under this regulation is caltrop, Tribulus terrestris, which can cause poisoning in sheep and goats, but is mainly a nuisance around buildings, roadsides and recreation areas because of its uncomfortably sharp spiny burrs.
Pathogens
Disease-causing pathogens such as fungi, oomycetes, bacteria, and viruses can cause damage to crops and garden plants.
Ecology
The term "plant pest", mainly applied to insect micropredators of plants, has a specific definition in terms of the International Plant Protection Convention and phytosanitary measures worldwide. A pest is any species, strain or biotype of plant, animal, or pathogenic agent injurious to plants or plant products.
Worldwide, agricultural pest impacts are increased by higher degrees of interconnectedness. This is due to the increased risk that any particular pest problem, arising anywhere in the world, will propagate across the entire interconnected system.
Plant defences against pests
Plants have developed strategies that they use in their own defence, be they thorns (modified stems) or spines (modified leaves), stings, a thick cuticle or waxy deposits, with the second line of defence being toxic or distasteful secondary metabolites. Mechanical injury to the plant tissues allows the entry of pathogens and stimulates the plant to mobilise its chemical defences. The plant soon seals off the wound to reduce further damage.
Plants sometimes take active steps to reduce herbivory. Macaranga triloba for example has adapted its thin-walled stems to create ideal housing for an ant Crematogaster spp., which, in turn, protects the plant from herbivores. In addition to providing housing, the plant also provides the ant with its exclusive food source in the form of food bodies located on the leaf stipules. Similarly, several Acacia tree species have developed stout spines that are swollen at the base, forming a hollow structure that provides housing for ants which protect the plant. These Acacia trees also produce nectar in nectaries on their leaves as food for the ants.
Climate change
Pest ranges are heavily determined by climate. The most common example for the longest time has been rainfall: although drought stress weakens crop disease resistance, drought also retards contagion and infection; and some variability in precipitation is universal. More recently, climate change has been rapidly altering ranges, mostly by pushing them towards the poles (both North and South). From 1960 to 2013, ranges shifted poleward by an average of roughly 2.7 km per year, albeit with significant differences between taxa. (Viruses and nematodes are exceptions, showing the opposite trend, toward the equator. This may be due to their lack of airborne dispersal, so that their trend conforms with the trend of human-aided dispersal, or to identification difficulties in the field.) In Europe, crop pests are expected to burgeon as the vertebrate predators which control them are suppressed by future climatic conditions.
Economic impact
In agriculture and horticulture
Together pests and diseases cause up to 40% yield losses every year. The animal groups of the greatest importance as agricultural pests are (in order of economic importance) insects, mites, nematodes and gastropod molluscs.
Insects are responsible for two major forms of damage to crops. First, there is the direct injury they cause to the plants as they feed on the tissues; a reduction in leaf surface available for photosynthesis, distortion of growing shoots, a diminution of the plant's growth and vigour, and the wilting of shoots and branches caused by the insects' tunneling activities. Secondly there is the indirect damage, where the insects do little direct harm, but either transmit or allow entry of fungal, bacterial or viral infections. Although some insects are polyphagous, many are restricted to one specific crop, or group of crops. In many cases it is the larva that feeds on the plant, building up a nutritional store that will be used by the short-lived adult; sawfly and lepidopteran larvae feed mainly on the aerial portions of plants while beetle larvae tend to live underground, feeding on roots, or tunnel into the stem or under the bark. The true bugs, Hemiptera, have piercing and sucking mouthparts and live by sucking sap from plants. These include aphids, whiteflies and scale insects. Apart from weakening the plant, they encourage the growth of sooty mould on the honeydew the insects produce, which cuts out the light and reduces photosynthesis, stunting the plant's growth. They often transmit serious viral diseases between plants.
The mites that cause most trouble in the field are the spider mites. These are less than 1 mm in diameter, can be very numerous, and thrive in hot, dry conditions. They mostly live on the underside of leaves and puncture the plant cells to feed, with some species forming webbing. They occur on nearly all important food crops and ornamental plants, both outdoors and under glass, and include some of the most economically important pests. Another important group of mites is the gall mites which affect a wide range of plants, several mite species being major pests causing substantial economic damage to crops. They can feed on the roots or the aerial parts of plants and transmit viruses. Some examples are the big bud mite that transmits the reversion virus of blackcurrants, the coconut mite which can devastate coconut production, and the cereal rust mite which transmits several grass and cereal viruses. Being exceedingly minute, many plant mites are spread by wind, although others use insects or other arthropods as a means to disperse.
The nematodes (eelworms) that attack plants are minute, often too small to be seen with the naked eye, but their presence is often apparent in the galls or "knots" they form in plant tissues. Vast numbers of nematodes are found in soil and attack roots, but others affect stems, buds, leaves, flowers and fruits. High infestations cause stunting, deformation and retardation of plant growth, and the nematodes can transmit viral diseases from one plant to another. When its populations are high, the potato cyst nematode can cause reductions of 80% in yield of susceptible potato varieties. The nematode eggs survive in the soil for many years, being stimulated to hatch by chemical cues produced by roots of susceptible plants.
Slugs and snails are terrestrial gastropod molluscs which typically chew leaves, stems, flowers, fruit and vegetable debris. Slugs and snails differ little from each other and both do considerable damage to plants. With novel crops being grown and with insect pests having been brought more under control by biological and other means, the damage done by molluscs becomes of greater significance. Terrestrial molluscs need moist environments; snails may be more noticeable because their shells provide protection from desiccation, while most slugs live in soil and only come out to feed at night. They devour seedlings, damage developing shoots and feed on salad crops and cabbages, and some species tunnel into potatoes and other tubers.
Weeds
A weed is a plant considered undesirable in a particular situation; the term has no botanical significance. Often, weeds are simply those native plants that are adapted to grow in disturbed ground, the disturbance caused by ploughing and cultivation favouring them over other species. Any plant is a weed if it appears in a location where it is unwanted; Bermuda grass makes a good lawn plant under hot dry conditions but becomes a bad weed when it out-competes cultivated plants.
A different group of weeds consists of those that are invasive, introduced, often unintentionally, to habitats to which they are not native but in which they thrive. Without their original competitors, herbivores, and diseases, they may increase and become a serious nuisance. One such plant is purple loosestrife, a native of Europe and Asia where it occurs in ditches, wet meadows and marshes; introduced into North America, it has no natural enemies to keep it in check and has taken over vast tracts of wetlands to the exclusion of native species.
In forestry
In forestry, pests may affect various parts of the tree, from its roots and trunk to the canopy far overhead. The accessibility of the part of the tree affected may make detection difficult, so that a pest problem may already be far advanced before it is first observed from the ground. The larch sawfly and spruce budworm are two insect pests prevalent in Alaska and aerial surveys can show which sections of forest are being defoliated in any given year so that appropriate remedial action can be taken.
Some pests may not be present on the tree all year round, either because of their life cycle or because they rotate between different host species at different times of the year. The larvae of wood-boring beetles, for example, are notorious for spending years excavating tunnels under the bark of trees, leading to significant structural damage. These larvae only emerge into the open for brief periods as adults, primarily to mate and disperse. The import and export of timber has inadvertently assisted some insect pests to establish themselves far from their country of origin. An insect may be of little importance in its native range, being kept under control by parasitoid wasps, predators, and the natural resistance of the host trees, but be a serious pest in a region into which it has been introduced. This is the case with the emerald ash borer, an insect native to north-eastern Asia, which, since its arrival in North America, has killed millions of ash trees. Another beetle species that exhibits pest behaviour is Melolontha hippocastani, the forest cockchafer, whose larvae cause severe, long-term damage to young trees by feeding on their roots.
In buildings
Animals able to live in the dry conditions found in buildings include many arthropods such as beetles, cockroaches, moths, mites, and silverfish. Another group, including termites, woodworm, longhorn beetles, and wood ants, causes structural damage to buildings and furniture. The natural habitat of these species is the decaying parts of trees. The deathwatch beetle infests the structural timbers of old buildings, mostly attacking hardwood, especially oak. The initial attack usually follows the entry of water into a building and the subsequent decay of damp timber. Furniture beetles mainly attack the sapwood of both hard and soft wood, only attacking the heartwood when it is modified by fungal decay. The presence of the beetles only becomes apparent when the larvae gnaw their way out, leaving small circular holes in the timber.
Carpet beetles and clothes moths cause non-structural damage to property such as clothing and carpets. It is the larvae that are destructive, feeding on wool, hair, fur, feathers and down. The moth larvae live where they feed, but the beetle larvae may hide behind skirting boards or in other similar locations between meals. They may be introduced to the home in any product containing animal fibres including upholstered furniture; the moths are feeble fliers but the carpet beetles may also enter houses through open windows. Furniture beetles, carpet beetles and clothes moths are also capable of creating great damage to museum exhibits, zoological and botanical collections, and other cultural heritage items. Constant vigilance is required to prevent an attack, and newly acquired items, and those that have been out on loan, may need quarantining before being added to the general collection.
There are over four thousand species of cockroach worldwide, but only four species are commonly regarded as pests, having adapted to live permanently in buildings. Considered to be a sign of unsanitary conditions, they feed on almost anything, reproduce rapidly and are difficult to eradicate. They can passively transport pathogenic microbes on their body surfaces, particularly in environments such as hospitals, and are linked with allergic reactions in humans.
Various insects attack dry food products, with flour beetles, the drugstore beetle, the sawtoothed grain beetle and the Indianmeal moth being found worldwide. The insects may be present in the warehouse or may be introduced during shipping, in retail outlets, or in the home; they may enter packets through tiny cracks or may chew holes in the packaging. The longer a product is stored, the more likely it is to become contaminated, with the insects often originating from dry pet foods.
Some mites, too, infest foodstuffs and other stored products. Each substance has its own specific mite, and they multiply with great rapidity. One of the most damaging is the flour mite, which is found in grain and may become exceedingly abundant in poorly stored material. In time, predatory mites usually move in and control the flour mites.
Countermeasures
Pest control in agriculture and horticulture
The control of pests in crops is as old as civilisation. The earliest approach was mechanical, from ploughing to picking off insects by hand. Early methods included the use of sulphur compounds, before 2500 BC in Sumeria. In ancient China, insecticides derived from plants were in use by 1200 BC to treat seeds and to fumigate plants. Chinese agronomy recognised biological control by natural enemies of pests and the varying of planting time to reduce pests before the first century AD. The agricultural revolution in Europe saw the introduction of effective plant-based insecticides such as pyrethrum, derris, quassia, and tobacco extract. The damage done to the wine industry in the 19th century by phylloxera (a sap-sucking insect) and downy mildew resulted in the development of resistant varieties and grafting, and in the accidental discovery of effective chemical pesticides: Bordeaux mixture (lime and copper sulphate) and Paris Green (an arsenic compound), both very widely used. Biological control also became established as an effective measure in the second half of the 19th century, starting with the vedalia beetle against cottony cushion scale. All these methods have been refined and developed since their discovery.
Pest control in forestry
Forest pests inflict costly damage, but treating them is often unaffordable, given the relatively low value of forest products compared to agricultural crops. It is also generally impossible to eradicate forest pests, given the difficulty of examining entire trees, and the certainty that pesticides would damage many forest organisms other than the intended pests. Forest integrated pest management therefore aims to use a combination of prevention, cultural control measures, and direct control (such as pesticide use). Cultural measures include choosing appropriate species, keeping competing vegetation under control, ensuring a suitable stocking density, and minimizing injury and stress to trees.
Pest control in buildings
Pest control in buildings can be approached in several ways, depending on the type of pest and the area affected. Methods include improving sanitation and garbage control, modifying the habitat, and using repellents, growth regulators, traps, baits and pesticides. For example, boron-based pesticides can be impregnated into the fibres of cellulose insulation to kill self-grooming insects such as ants and cockroaches. Clothes moths can be controlled with airtight containers for storage, periodic laundering of garments, trapping, freezing, heating and the use of chemicals. Traditional mothballs deter adult moths with strong-smelling naphthalene; modern ones use volatile repellents such as 1,4-dichlorobenzene. Moth larvae can be killed with insecticides such as permethrin or pyrethroids. However, insecticides cannot safely be used in food storage areas; alternative treatments include freezing foods for four days or baking them for half an hour, either of which kills any insects present.
In mythology, religion, folklore, and culture
Pests have attracted human attention from the birth of civilisation. Plagues of locusts caused devastation in the ancient Middle East, and were recorded in tombs in Ancient Egypt from as early as 2470 BC, and in the Book of Exodus in the Bible, as taking place in Egypt around 1446 BC. Homer's Iliad mentions locusts taking to the wing to escape fire. Given the impact of agricultural pests on human lives, people have prayed for deliverance. For example, the 10th century Greek monk Tryphon of Constantinople is said to have prayed "Snails, earwigs and all other creatures, hurt not the vines, nor the land nor the fruit of the trees, nor the vegetables ... but depart into the wild mountains."
The 11th-century Old English medical text Lacnunga contained charms and spells to ward off or treat pests such as wid smeogan wyrme, "penetrating worms", in this case requiring a charm to be sung, accompanied by covering the wound with spittle, pounded green centaury, and hot cow's urine.
The 20th century "prayer against pests" including the words "By Your power may these injurious animals be driven off so that they will do no harm to any one and will leave our fields and meadows unharmed" was printed in the 1956 Rural Life Prayerbook.
| Technology | Pest and disease control | null |
209441 | https://en.wikipedia.org/wiki/Perseus%20%28constellation%29 | Perseus (constellation) | Perseus is a constellation in the northern sky, named after the Greek mythological hero Perseus. It is one of the 48 ancient constellations listed by the 2nd-century astronomer Ptolemy, and among the 88 modern constellations defined by the International Astronomical Union (IAU). It is located near several other constellations named after ancient Greek legends surrounding Perseus, including Andromeda to the west and Cassiopeia to the north. Perseus is also bordered by Aries and Taurus to the south, Auriga to the east, Camelopardalis to the north, and Triangulum to the west. Some star atlases during the early 19th century also depicted Perseus holding the disembodied head of Medusa, whose asterism was named together as Perseus et Caput Medusae; however, this never came into popular usage.
The galactic plane of the Milky Way passes through Perseus, whose brightest star is the yellow-white supergiant Alpha Persei (also called Mirfak), which shines at magnitude 1.79. It and many of the surrounding stars are members of an open cluster known as the Alpha Persei Cluster. The best-known star, however, is Algol (Beta Persei), linked with ominous legends because of its variability, which is noticeable to the naked eye. Rather than being an intrinsically variable star, it is an eclipsing binary. Other notable star systems in Perseus include X Persei, a binary system containing a neutron star, and GK Persei, a nova that peaked at magnitude 0.2 in 1901. The Double Cluster, comprising two open clusters quite near each other in the sky, was known to the ancient Chinese. The constellation gives its name to the Perseus cluster (Abell 426), a massive galaxy cluster located 250 million light-years from Earth. It hosts the radiant of the annual Perseids meteor shower—one of the most prominent meteor showers in the sky.
History and mythology
In Greek mythology, Perseus was the son of Danaë, who was sent by King Polydectes to bring the head of Medusa the Gorgon—whose visage caused all who gazed upon her to turn to stone. Perseus slew Medusa in her sleep, and Pegasus and Chrysaor appeared from her body. Perseus continued to the realm of Cepheus whose daughter Andromeda was to be sacrificed to Cetus the sea monster.
Perseus rescued Andromeda from the monster by killing it with his sword. He turned Polydectes and his followers to stone with Medusa's head and appointed Dictys the fisherman king. Perseus and Andromeda married and had six children. In the sky, Perseus lies near the constellations Andromeda, Cepheus, Cassiopeia (Andromeda's mother), Cetus, and Pegasus.
In Neo-Assyrian Babylonia (911–605 BC), the constellation of Perseus was known as the Old Man constellation (SU.GI), then associated with East in the MUL.APIN, an astronomical text from the 7th century BC.
In non-Western astronomy
Four Chinese constellations are contained in the area of the sky identified with Perseus in the West. Tiānchuán (天船), the Celestial Boat, was the third paranatellon (a star or constellation that rises at the same time as another star or object) of the third house of the White Tiger of the West, representing the boats that Chinese people were reminded to build in case of a catastrophic flood season. Incorporating stars from the northern part of the constellation, it contained Mu, Delta, Psi, Alpha, Gamma and Eta Persei. Jīshuǐ (積水), the Swollen Waters, was the fourth paranatellon of the aforementioned house, representing the potential of unusually high floods during the end of August and beginning of September at the beginning of the flood season. Lambda and possibly Mu Persei lay within it. Dàlíng (大陵), the Great Trench, was the fifth paranatellon of that house, representing the trenches where criminals executed en masse in August were interred. It was formed by Kappa, Omega, Rho, 24, 17 and 15 Persei. The pile of corpses prior to their interment was represented by Jīshī (積屍, Pi Persei), the sixth paranatellon of the house. The Double Cluster, h and Chi Persei, had special significance in Chinese astronomy.
In Polynesia, Perseus was not commonly recognized as a separate constellation; the only people that named it were those of the Society Islands, who called it Faa-iti, meaning "Little Valley". Algol may have been named Matohi by the Māori people, but the evidence for this identification is disputed. Matohi ("Split") occasionally came into conflict with Tangaroa-whakapau over which of them should appear in the sky, the outcome affecting the tides. It matches the Maori description of a blue-white star near Aldebaran but does not disappear as the myth would indicate.
Characteristics
Perseus is bordered by Aries and Taurus to the south, Auriga to the east, Camelopardalis and Cassiopeia to the north, and Andromeda and Triangulum to the west. Covering 615 square degrees, it ranks twenty-fourth of the 88 constellations in size. It appears prominently in the northern sky during the Northern Hemisphere's winter. Its main asterism consists of 19 stars. The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a 26-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between approximately 1h 29m and 4h 51m, while the declination coordinates are between approximately +31° and +59°. The International Astronomical Union (IAU) adopted the three-letter abbreviation "Per" for the constellation in 1922.
Features
Stars
Algol (from the Arabic رأس الغول Ra's al-Ghul, which means The Demon's Head), also known by its Bayer designation Beta Persei, is the best-known star in Perseus. Representing the head of the Gorgon Medusa in Greek mythology, it was called Horus in Egyptian mythology and Rosh ha Satan ("Satan's Head") in Hebrew. Located 92.8 light-years from Earth, it varies in apparent magnitude from a minimum of 3.5 to a maximum of 2.3 over a period of 2.867 days.
The star system is the prototype of a group of eclipsing binary stars named Algol variables, though it has a third member to make up what is actually a triple star system. The brightest component is a blue-white main-sequence star of spectral type B8V, which is 3.5 times as massive and 180 times as luminous as the Sun. The secondary component is an orange subgiant star of type K0IV that has begun cooling and expanding to 3.5 times the radius of the Sun, and has 4.5 times the luminosity and 80% of its mass. These two are separated by only 0.05 astronomical units (AU)—five percent of the distance between the Earth and Sun; the main dip in brightness arises when the larger fainter companion passes in front of the hotter brighter primary. The tertiary component is a main sequence star of type A7, which is located on average 2.69 AU from the other two stars. AG Persei is another Algol variable in Perseus, whose primary component is a B-type main sequence star with an apparent magnitude of 6.69. Phi Persei is a double star, although the two components do not eclipse each other. The primary star is a Be star of spectral type B0.5, possibly a giant star, and the secondary companion is likely a stellar remnant. The secondary has a similar spectral type to O-type subdwarfs.
With the historical name Mirfak (Arabic for elbow) or Algenib, Alpha Persei is the brightest star of this constellation with an apparent magnitude of 1.79. A supergiant of spectral type F5Ib located around 590 light-years away from Earth, Mirfak has 5,000 times the luminosity and 42 times the diameter of the Sun. It is the brightest member of the Alpha Persei Cluster (also known as Melotte 20 and Collinder 39), which is an open cluster containing many luminous stars. Neighboring bright stars that are members include the Be stars Delta (magnitude 3.0), Psi (4.3), and 48 Persei (4.0); the Beta Cephei variable Epsilon Persei (2.9); and the stars 29 (5.2), 30 (5.5), 31 (5.0), and 34 Persei (4.7). Of magnitude 4.05, nearby Iota Persei has been considered a member of the group, but is actually located a mere 34 light-years distant. This star is very similar to the Sun, shining with 2.2 times its luminosity. It is a yellow main sequence star of spectral type G0V. Extensive searches have failed to find evidence of it having a planetary system.
Zeta Persei is the third-brightest star in the constellation at magnitude 2.86. Around 750 light-years from Earth, it is a blue-white supergiant 26–27 times the radius of the Sun and 47,000 times its luminosity. It is the brightest star (as seen from Earth) of a moving group of bright blue-white giant and supergiant stars, the Perseus OB2 Association or Zeta Persei Association. Zeta is a triple star system, with a companion blue-white main sequence star of spectral type B8 and apparent magnitude 9.16 around 3,900 AU distant from the primary, and a white main sequence star of magnitude 9.90 and spectral type A2, some 50,000 AU away, that may or may not be gravitationally bound to the other two. X Persei is a double system in this association; one component is a hot, bright star and the other is a neutron star. With an apparent magnitude of 6.72, it is too dim to be seen with the naked eye even in perfectly dark conditions. The system is an X-ray source and the primary star appears to be undergoing substantial mass loss. Once thought to be a member of the Perseus OB2 Association, Omicron Persei (Atik) is a multiple star system with a combined visual magnitude of 3.85. It is composed of two blue-white stars—a giant of spectral class B1.5 and main sequence star of B3—which orbit each other every 4.5 days and are distorted into ovoids due to their small separation. The system has a third star about which little is known. At an estimated distance of 1,475 light-years from Earth, the system is now thought to lie too far from the center of the Zeta Persei group to belong to it.
GRO J0422+32 (V518 Persei) is another X-ray binary in Perseus. One component is a red dwarf star of spectral type M4.5V, which orbits a mysterious dense and heavy object—possibly a black hole—every 5.1 hours. The system is an X-ray nova, meaning that it experiences periodic outbursts in the X-ray band of the electromagnetic spectrum. If the system does indeed contain a black hole, it would be the smallest one ever recorded. Further analysis in 2012 calculated a mass of 2.1 solar masses, which raises questions as to what the object actually is as it appears to be too small to be a black hole.
GK Persei, also known as Nova Persei 1901, is a bright nova that appeared halfway between Algol and Delta Persei. Discovered on 21 February 1901 by Scottish amateur astronomer Thomas David Anderson, it peaked at magnitude 0.2—almost as bright as Capella and Vega. It faded to magnitude 13 around 30 years after its peak brightness. Xi Persei, traditionally known as Menkhib, a blue giant of spectral type O7III, is one of the hottest bright stars in the sky, with a surface temperature of 37,500 K. It is one of the more massive stars, being between 26 and 32 solar masses, and is 330,000 times as luminous as the Sun.
Named Gorgonea Tertia, Rho Persei varies in brightness like Algol, but is a pulsating rather than eclipsing star. At an advanced stage of stellar evolution, it is a red giant that has expanded for the second time to have a radius around 150 times that of the Sun. Its helium has been fused into heavier elements and its core is composed of carbon and oxygen. It is a semiregular variable star of the Mu Cephei type, whose apparent magnitude varies between 3.3 and 4.0 with periods of 50, 120 and 250 days. The Double Cluster contains three even larger stars, each over 700 solar radii: S, RS, and SU Persei are all semiregular pulsating M-type supergiants. The stars are not visible to the naked eye; SU Persei, the brightest of the three, has an apparent magnitude of 7.9 and thus is visible through binoculars. AX Persei is another binary star; its primary component is a red giant in an advanced phase of stellar evolution, which is transferring material onto an accretion disc around a smaller star. The star system is one of the few eclipsing symbiotic binaries, but is unusual because the secondary star is not a white dwarf, but an A-type star. DY Persei is a variable star that is the prototype of DY Persei variables, which are carbon-rich R Coronae Borealis variables that exhibit the variability of asymptotic giant branch stars. DY Persei itself is a carbon star that is too dim to see through binoculars, with an apparent magnitude of 10.6.
Seven stars in Perseus have been found to have planetary systems. V718 Persei is a star in the young open cluster IC 348 that appears to be periodically eclipsed by a giant planet every 4.7 years. This has been inferred to be an object with a maximum mass of 6 times that of Jupiter and an orbital radius of 3.3 AU.
Deep-sky objects
The galactic plane of the Milky Way passes through Perseus, but is much less obvious than elsewhere in the sky as it is mostly obscured by molecular clouds. The Perseus Arm is a spiral arm of the Milky Way galaxy and stretches across the sky from the constellation Cassiopeia through Perseus and Auriga to Gemini and Monoceros. This segment is towards the rim of the galaxy.
Within the Perseus Arm lie two open clusters (NGC 869 and NGC 884) known as the Double Cluster. Sometimes known as h and Chi (χ) Persei, respectively, they are easily visible through binoculars and small telescopes. Both lie more than 7,000 light-years from Earth and are several hundred light-years apart. Both clusters are of approximately magnitude 4 and 0.5 degrees in diameter. The two are Trumpler class I 3 r clusters, though NGC 869 is a Shapley class f and NGC 884 is a Shapley class e cluster. These classifications indicate that they are both quite rich (dense); NGC 869 is the richer of the pair. The clusters are both distinct from the surrounding star field and are clearly concentrated at their centers. The constituent stars, numbering over 100 in each cluster, range widely in brightness.
M34 is an open cluster that appears at magnitude 5.5, and is approximately 1,500 light-years from Earth. It contains about 100 stars scattered over a field of view larger than that of the full moon. M34 can be resolved with good eyesight but is best viewed using a telescope at low magnifications. IC 348 is a somewhat young open cluster that is still contained within the nebula from which its stars formed. It is located about 1,027 light-years from Earth, is about 2 million years old, and contains many stars with circumstellar disks. Many brown dwarfs have been discovered in this cluster due to its age; since brown dwarfs cool as they age, it is easier to find them in younger clusters.
There are many nebulae in Perseus. M76 is a planetary nebula, also called the Little Dumbbell Nebula. It appears two arc-minutes by one arc-minute across and has an apparent brightness of magnitude 10.1. NGC 1499, also known as the California Nebula, is an emission nebula that was discovered in 1884–85 by American astronomer Edward E. Barnard. It is very difficult to observe visually because its low surface brightness makes it appear dimmer than most other emission nebulae. NGC 1333 is a reflection nebula and a star-forming region. Perseus also contains a giant molecular cloud, called the Perseus molecular cloud; it belongs to the Orion Spur and is known for its low rate of star formation compared to similar clouds.
Perseus contains some notable galaxies. NGC 1023 is a barred spiral galaxy of magnitude 10.35 and the principal member of the NGC 1023 group of galaxies; it is possibly interacting with another galaxy. NGC 1260 is either a lenticular or tightly wound spiral galaxy about 250 million light-years from Earth. It was the host galaxy of the supernova SN 2006gy, one of the brightest ever recorded. It is a member of the Perseus Cluster (Abell 426), a massive galaxy cluster located 250 million light-years from Earth. With a redshift of 0.0179, Abell 426 is the closest major cluster to the Earth. NGC 1275, a component of the cluster, is a Seyfert galaxy containing an active nucleus that produces jets of material, surrounding the galaxy with massive bubbles. These bubbles create sound waves that travel through the Perseus Cluster, sounding a B flat 57 octaves below middle C. This galaxy is a cD galaxy that has undergone many galactic mergers throughout its existence, as evidenced by the "high velocity system"—the remnants of a smaller galaxy—surrounding it. Its active nucleus is a strong source of radio waves. 3C 31 is an active galaxy and radio source in Perseus 237 million light-years from Earth (redshift 0.0173). Its jets, caused by the supermassive black hole at its center, extend several million light-years in opposing directions, making them some of the largest objects in the universe.
Meteor showers
The Perseids are a prominent annual meteor shower that appear to radiate from Perseus from mid-July, peaking in activity between 9 and 14 August each year. Associated with Comet Swift–Tuttle, they have been observed for about 2,000 years. The September Epsilon Perseids, discovered in 2012, are a meteor shower with an unknown parent body in the Oort cloud.
| Physical sciences | Other | Astronomy |
209455 | https://en.wikipedia.org/wiki/Thylakoid | Thylakoid | Thylakoids are membrane-bound compartments inside chloroplasts and cyanobacteria. They are the site of the light-dependent reactions of photosynthesis. Thylakoids consist of a thylakoid membrane surrounding a thylakoid lumen. Chloroplast thylakoids frequently form stacks of disks referred to as grana (singular: granum). Grana are connected by intergranal or stromal thylakoids, which join granum stacks together as a single functional compartment.
In thylakoid membranes, chlorophyll pigments are found in packets called quantasomes. Each quantasome contains 230 to 250 chlorophyll molecules.
Etymology
The word thylakoid comes from the Greek word thylakos (θύλακος), meaning "sac" or "pouch". Thus, thylakoid means "sac-like" or "pouch-like".
Structure
Thylakoids are membrane-bound structures embedded in the chloroplast stroma. A stack of thylakoids is called a granum and resembles a stack of coins.
Membrane
The thylakoid membrane is the site of the light-dependent reactions of photosynthesis, with the photosynthetic pigments embedded directly in the membrane. It consists of an alternating pattern of dark and light bands, each measuring 1 nanometre. The thylakoid lipid bilayer shares characteristic features with prokaryotic membranes and the inner chloroplast membrane. For example, acidic lipids can be found in thylakoid membranes, in cyanobacteria and in other photosynthetic bacteria, and are involved in the functional integrity of the photosystems. The thylakoid membranes of higher plants are composed primarily of phospholipids and galactolipids that are asymmetrically arranged along and across the membranes. Thylakoid membranes are richer in galactolipids than in phospholipids; they also predominantly consist of the hexagonal phase II forming lipid monogalactosyl diglyceride. Despite this unique composition, plant thylakoid membranes have been shown to assume a largely lipid-bilayer dynamic organization. The lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid, are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and the inner membrane of the plastid envelope, and are transported from the inner membrane to the thylakoids via vesicles.
Lumen
The thylakoid lumen is a continuous aqueous phase enclosed by the thylakoid membrane. It plays an important role for photophosphorylation during photosynthesis. During the light-dependent reaction, protons are pumped across the thylakoid membrane into the lumen making it acidic down to pH 4.
Granum and stroma lamellae
In higher plants thylakoids are organized into a granum-stroma membrane assembly. A granum (plural grana) is a stack of thylakoid discs. Chloroplasts can have from 10 to 100 grana. Grana are connected by stroma thylakoids, also called intergranal thylakoids or lamellae. Grana thylakoids and stroma thylakoids can be distinguished by their different protein composition. Grana contribute to chloroplasts' large surface area to volume ratio. A recent electron tomography study of the thylakoid membranes has shown that the stroma lamellae are organized in wide sheets perpendicular to the grana stack axis and form multiple right-handed helical surfaces at the granal interface. Left-handed helical surfaces consolidate between the right-handed helices and sheets. This complex network of alternating helical membrane surfaces of different radii and pitch was shown to minimize the surface and bending energies of the membranes. This new model, the most extensive one generated to date, revealed that features from two, seemingly contradictory, older models coexist in the structure. Notably, similar arrangements of helical elements of alternating handedness, often referred to as "parking garage" structures, were proposed to be present in the endoplasmic reticulum and in ultradense nuclear matter. This structural organization may constitute a fundamental geometry for connecting between densely packed layers or sheets.
Formation
Chloroplasts develop from proplastids when seedlings emerge from the ground. Thylakoid formation requires light. In the plant embryo and in the absence of light, proplastids develop into etioplasts that contain semicrystalline membrane structures called prolamellar bodies. When exposed to light, these prolamellar bodies develop into thylakoids. This does not happen in seedlings grown in the dark, which undergo etiolation. Insufficient exposure to light can cause the thylakoids, and in turn the chloroplasts, to fail, resulting in the death of the plant.
Thylakoid formation requires the action of vesicle-inducing protein in plastids 1 (VIPP1). Plants cannot survive without this protein, and reduced VIPP1 levels lead to slower growth and paler plants with reduced ability to photosynthesize. VIPP1 appears to be required for basic thylakoid membrane formation, but not for the assembly of protein complexes of the thylakoid membrane. It is conserved in all organisms containing thylakoids, including cyanobacteria, green algae, such as Chlamydomonas, and higher plants, such as Arabidopsis thaliana.
Isolation and fractionation
Thylakoids can be purified from plant cells using a combination of differential and gradient centrifugation. Disruption of isolated thylakoids, for example by mechanical shearing, releases the lumenal fraction. Peripheral and integral membrane fractions can be extracted from the remaining membrane fraction. Treatment with sodium carbonate (Na2CO3) detaches peripheral membrane proteins, whereas treatment with detergents and organic solvents solubilizes integral membrane proteins.
Proteins
Thylakoids contain many integral and peripheral membrane proteins, as well as lumenal proteins. Recent proteomics studies of thylakoid fractions have provided further details on the protein composition of the thylakoids. These data have been summarized in several plastid protein databases that are available online.
According to these studies, the thylakoid proteome consists of at least 335 different proteins. Out of these, 89 are in the lumen, 116 are integral membrane proteins, 62 are peripheral proteins on the stroma side, and 68 peripheral proteins on the lumenal side. Additional low-abundance lumenal proteins can be predicted through computational methods. Of the thylakoid proteins with known functions, 42% are involved in photosynthesis. The next largest functional groups include proteins involved in protein targeting, processing and folding with 11%, oxidative stress response (9%) and translation (8%).
Integral membrane proteins
Thylakoid membranes contain integral membrane proteins which play an important role in light-harvesting and the light-dependent reactions of photosynthesis. There are four major protein complexes in the thylakoid membrane:
Photosystems I and II
Cytochrome b6f complex
ATP synthase
Photosystem II is located mostly in the grana thylakoids, whereas photosystem I and ATP synthase are mostly located in the stroma thylakoids and the outer layers of grana. The cytochrome b6f complex is distributed evenly throughout thylakoid membranes. Due to the separate location of the two photosystems in the thylakoid membrane system, mobile electron carriers are required to shuttle electrons between them. These carriers are plastoquinone and plastocyanin. Plastoquinone shuttles electrons from photosystem II to the cytochrome b6f complex, whereas plastocyanin carries electrons from the cytochrome b6f complex to photosystem I.
Together, these proteins make use of light energy to drive electron transport chains that generate a chemiosmotic potential across the thylakoid membrane and NADPH, a product of the terminal redox reaction. The ATP synthase uses the chemiosmotic potential to make ATP during photophosphorylation.
Photosystems
These photosystems are light-driven redox centers, each consisting of an antenna complex that uses chlorophylls and accessory photosynthetic pigments such as carotenoids and phycobiliproteins to harvest light at a variety of wavelengths. Each antenna complex has between 250 and 400 pigment molecules, and the energy they absorb is shuttled by resonance energy transfer to a specialized chlorophyll a at the reaction center of each photosystem. When either of the two chlorophyll a molecules at the reaction center absorbs energy, an electron is excited and transferred to an electron-acceptor molecule. Photosystem I contains a pair of chlorophyll a molecules, designated P700, at its reaction center that maximally absorbs 700 nm light. Photosystem II contains P680 chlorophyll that absorbs 680 nm light best (these wavelengths correspond to deep red; see the visible spectrum). The P is short for pigment, and the number is the specific absorption peak in nanometers for the chlorophyll molecules in each reaction center.
Cytochrome b6f complex
The cytochrome b6f complex is part of the thylakoid electron transport chain and couples electron transfer to the pumping of protons into the thylakoid lumen. Energetically, it is situated between the two photosystems and transfers electrons from photosystem II-plastoquinone to plastocyanin-photosystem I.
ATP synthase
The thylakoid ATP synthase is a CF1FO-ATP synthase similar to the mitochondrial ATPase. It is integrated into the thylakoid membrane with the CF1-part sticking into the stroma. Thus, ATP synthesis occurs on the stromal side of the thylakoids where the ATP is needed for the light-independent reactions of photosynthesis.
Lumen proteins
The electron transport protein plastocyanin is present in the lumen and shuttles electrons from the cytochrome b6f protein complex to photosystem I. While plastoquinones are lipid-soluble and therefore move within the thylakoid membrane, plastocyanin moves through the thylakoid lumen.
The lumen of the thylakoids is also the site of water oxidation by the oxygen evolving complex associated with the lumenal side of photosystem II.
Lumenal proteins can be predicted computationally based on their targeting signals. In Arabidopsis, among the predicted lumenal proteins possessing the Tat signal, the largest groups with known functions are 19% involved in protein processing (proteolysis and folding), 18% in photosynthesis, 11% in metabolism, and 7% in redox carriers and defense.
Protein expression
Chloroplasts have their own genome, which encodes a number of thylakoid proteins. However, during the course of plastid evolution from their cyanobacterial endosymbiotic ancestors, extensive gene transfer from the chloroplast genome to the cell nucleus took place. This results in the four major thylakoid protein complexes being encoded in part by the chloroplast genome and in part by the nuclear genome. Plants have developed several mechanisms to co-regulate the expression of the different subunits encoded in the two different organelles to assure the proper stoichiometry and assembly of these protein complexes. For example, transcription of nuclear genes encoding parts of the photosynthetic apparatus is regulated by light. Biogenesis, stability and turnover of thylakoid protein complexes are regulated by phosphorylation via redox-sensitive kinases in the thylakoid membranes. The translation rate of chloroplast-encoded proteins is controlled by the presence or absence of assembly partners (control by epistasy of synthesis). This mechanism involves negative feedback through binding of excess protein to the 5' untranslated region of the chloroplast mRNA. Chloroplasts also need to balance the ratios of photosystem I and II for the electron transfer chain. The redox state of the electron carrier plastoquinone in the thylakoid membrane directly affects the transcription of chloroplast genes encoding proteins of the reaction centers of the photosystems, thus counteracting imbalances in the electron transfer chain.
Protein targeting to the thylakoids
Thylakoid proteins are targeted to their destination via signal peptides and prokaryotic-type secretory pathways inside the chloroplast. Most thylakoid proteins encoded by a plant's nuclear genome need two targeting signals for proper localization: an N-terminal chloroplast targeting peptide, followed by a thylakoid targeting peptide. Proteins are imported through the translocon of the outer and inner membrane (Toc and Tic) complexes. After entering the chloroplast, the first targeting peptide is cleaved off by a protease processing imported proteins. This unmasks the second targeting signal, and the protein is exported from the stroma into the thylakoid in a second targeting step. This second step requires the action of protein translocation components of the thylakoids and is energy-dependent. Proteins are inserted into the membrane via the SRP-dependent pathway (1), the Tat-dependent pathway (2), or spontaneously via their transmembrane domains. Lumenal proteins are exported across the thylakoid membrane into the lumen by either the Tat-dependent pathway (2) or the Sec-dependent pathway (3), and are released by cleavage from the thylakoid targeting signal. The different pathways utilize different signals and energy sources. The Sec (secretory) pathway requires ATP as an energy source and consists of SecA, which binds to the imported protein, and a Sec membrane complex to shuttle the protein across. Proteins with a twin arginine motif in their thylakoid signal peptide are shuttled through the Tat (twin arginine translocation) pathway, which requires a membrane-bound Tat complex and the pH gradient as an energy source. Some other proteins are inserted into the membrane via the SRP (signal recognition particle) pathway. The chloroplast SRP can interact with its target proteins either post-translationally or co-translationally, thus transporting imported proteins as well as those that are translated inside the chloroplast. The SRP pathway requires GTP and the pH gradient as energy sources. Some transmembrane proteins may also spontaneously insert into the membrane from the stromal side without energy requirement.
Function
The thylakoids are the site of the light-dependent reactions of photosynthesis. These include light-driven water oxidation and oxygen evolution, the pumping of protons across the thylakoid membranes coupled with the electron transport chain of the photosystems and cytochrome complex, and ATP synthesis by the ATP synthase utilizing the generated proton gradient.
Water photolysis
The first step in photosynthesis is the light-driven oxidation (splitting) of water to provide the electrons for the photosynthetic electron transport chains as well as protons for the establishment of a proton gradient. The water-splitting reaction occurs on the lumenal side of the thylakoid membrane and is driven by the light energy captured by the photosystems. This oxidation of water conveniently produces the waste product O2 that is vital for cellular respiration. The molecular oxygen formed by the reaction is released into the atmosphere.
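For reference, the overall water-splitting reaction at photosystem II is

2 H2O → 4 H+ + 4 e− + O2

(standard biochemistry; the same stoichiometry appears in the electron-transport discussion below).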
Electron transport chains
Two different variations of electron transport are used during photosynthesis:
Noncyclic electron transport or non-cyclic photophosphorylation produces NADPH + H+ and ATP.
Cyclic electron transport or cyclic photophosphorylation produces only ATP.
The noncyclic variety involves the participation of both photosystems, while the cyclic electron flow is dependent on only photosystem I.
Photosystem I uses light energy to reduce NADP+ to NADPH + H+, and is active in both noncyclic and cyclic electron transport. In cyclic mode, the energized electron is passed down a chain that ultimately returns it (in its base state) to the chlorophyll that energized it.
Photosystem II uses light energy to oxidize water molecules, producing electrons (e−), protons (H+), and molecular oxygen (O2), and is only active in noncyclic transport. Electrons in this system are not conserved, but are rather continually entering from oxidized 2H2O (O2 + 4 H+ + 4 e−) and exiting with NADP+ when it is finally reduced to NADPH.
Chemiosmosis
A major function of the thylakoid membrane and its integral photosystems is the establishment of chemiosmotic potential. The carriers in the electron transport chain use some of the electron's energy to actively transport protons from the stroma to the lumen. During photosynthesis, the lumen becomes acidic, as low as pH 4, compared to pH 8 in the stroma. This represents a 10,000 fold concentration gradient for protons across the thylakoid membrane.
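As a quick arithmetic check (illustrative, not from the source), the quoted pH values correspond to

$$\frac{[\mathrm{H^+}]_{\text{lumen}}}{[\mathrm{H^+}]_{\text{stroma}}} = 10^{\,\mathrm{pH}_{\text{stroma}} - \mathrm{pH}_{\text{lumen}}} = 10^{8-4} = 10^{4},$$

i.e. the 10,000-fold gradient stated above.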
Source of proton gradient
The protons in the lumen come from three primary sources.
Photolysis by photosystem II oxidises water to oxygen, protons and electrons in the lumen.
The transfer of electrons from photosystem II to plastoquinone during non-cyclic electron transport consumes two protons from the stroma. These are released in the lumen when the reduced plastoquinol is oxidized by the cytochrome b6f protein complex on the lumen side of the thylakoid membrane. From the plastoquinone pool, electrons pass through the cytochrome b6f complex. This integral membrane assembly resembles cytochrome bc1.
The reduction of plastoquinone by ferredoxin during cyclic electron transport also transfers two protons from the stroma to the lumen.
The proton gradient is also caused by the consumption of protons in the stroma to make NADPH from NADP+ at the NADP reductase.
ATP generation
The molecular mechanism of ATP (Adenosine triphosphate) generation in chloroplasts is similar to that in mitochondria and takes the required energy from the proton motive force (PMF). However, chloroplasts rely more on the chemical potential of the PMF to generate the potential energy required for ATP synthesis. The PMF is the sum of a proton chemical potential (given by the proton concentration gradient) and a transmembrane electrical potential (given by charge separation across the membrane). Compared to the inner membranes of mitochondria, which have a significantly higher membrane potential due to charge separation, thylakoid membranes lack a charge gradient. To compensate for this, the 10,000 fold proton concentration gradient across the thylakoid membrane is much higher compared to a 10 fold gradient across the inner membrane of mitochondria. The resulting chemiosmotic potential between the lumen and stroma is high enough to drive ATP synthesis using the ATP synthase. As the protons travel back down the gradient through channels in ATP synthase, ADP + Pi are combined into ATP. In this manner, the light-dependent reactions are coupled to the synthesis of ATP via the proton gradient.
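To put a number on this (an illustrative estimate assuming a negligible electrical potential and 25 °C; the figures are not from the source), the proton motive force contributed by the pH difference is

$$\mathrm{PMF} = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \approx 0 - (59\ \mathrm{mV}) \times (4 - 8) \approx 236\ \mathrm{mV},$$

using roughly 59 mV per pH unit at 25 °C.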
Thylakoid membranes in cyanobacteria
Cyanobacteria are photosynthetic prokaryotes with highly differentiated membrane systems. Cyanobacteria have an internal system of thylakoid membranes where the fully functional electron transfer chains of photosynthesis and respiration reside. The presence of different membrane systems lends these cells a unique complexity among bacteria. Cyanobacteria must be able to reorganize the membranes, synthesize new membrane lipids, and properly target proteins to the correct membrane system. The outer membrane, plasma membrane, and thylakoid membranes each have specialized roles in the cyanobacterial cell. Understanding the organization, functionality, protein composition, and dynamics of the membrane systems remains a great challenge in cyanobacterial cell biology.
In contrast to the thylakoid network of higher plants, which is differentiated into grana and stroma lamellae, the thylakoids in cyanobacteria are organized into multiple concentric shells that split and fuse to parallel layers forming a highly connected network. This results in a continuous network that encloses a single lumen (as in higher‐plant chloroplasts) and allows water‐soluble and lipid‐soluble molecules to diffuse through the entire membrane network. Moreover, perforations are often observed within the parallel thylakoid sheets. These gaps in the membrane allow for the traffic of particles of different sizes throughout the cell, including ribosomes, glycogen granules, and lipid bodies. The relatively large distance between the thylakoids provides space for the external light-harvesting antennae, the phycobilisomes. This macrostructure, as in the case of higher plants, shows some flexibility during changes in the physicochemical environment.
| Biology and health sciences | Plant cells | Biology |
209546 | https://en.wikipedia.org/wiki/Coalsack%20Nebula | Coalsack Nebula | The Coalsack Nebula (Southern Coalsack, or simply the Coalsack) is a dark nebula, which is visible to the naked eye as a dark patch obscuring part of the Milky Way east of Acrux (Alpha Crucis) in the constellation of Crux.
General information
Historically, other dark clouds in the night sky were also called coalsacks. The Coalsack Nebula acquired a distinguishing counterpart in 1899, when Richard Hinckley Allen named another dark cloud the Northern Coalsack Nebula.
The Coalsack Nebula covers nearly 7° by 5° and extends into the neighboring constellations Centaurus and Musca. The first observation was reported by Vicente Yáñez Pinzón in 1499. It was named "il Canopo fosco" (the dark Canopus) by Amerigo Vespucci and was also called "Macula Magellani" (Magellan's Spot) or "Black Magellanic Cloud" in opposition to the Magellanic Clouds.
In Australian Aboriginal astronomy, the Coalsack forms the head of the emu in the sky in several Aboriginal cultures. Amongst the Wardaman people, it is said to be the head and shoulders of a law-man watching the people to ensure they do not break traditional law. According to a legend reported by W. E. Harney, this being is called Utdjungon and only adherence to the tribal law by surviving tribe members could prevent him from destroying the world with a fiery star. There is also a reference by Gaiarbau (1880) regarding the coalsacks replicating bora rings on Earth. These astronomical sites allowed the spirits to continue ceremony similar to their human counterparts on Earth. As bora grounds are generally located on the compass points north–south, the southern coal sack indicates the ceremonial ring.
In Inca astronomy this nebula was called Yutu, after a partridge-like South American bird, or Tinamou.
In popular culture
The Coalsack Nebula appears in Kenji Miyazawa's story Night on the Galactic Railroad, in which the protagonist, Giovanni, watches his friend Campanella depart to a personal afterlife referred to as "True Heaven", where Campanella sees his dead mother waiting for him. Giovanni is unable to see True Heaven; to him it appears only as the empty, black Coalsack.
The Coalsack Nebula and the galactic area surrounding it played a large role in Jerry Pournelle's CoDominium Universe, particularly The Mote in God's Eye and the sequel The Gripping Hand, both co-authored with Larry Niven.
In these novels, a human-colonized system, New Caledonia, is on the opposite side of the Coalsack from Earth. Set against the Coalsack is a red supergiant, and between the supergiant and New Caledonia is a yellow F6 star, known as "The Mote in God's Eye".
| Physical sciences | Notable nebulae | Astronomy |
209841 | https://en.wikipedia.org/wiki/Rynchops | Rynchops | The skimmers, forming the genus Rynchops, are tern-like birds in the family Laridae. The genus comprises three species found in South Asia, Africa, and the Americas. They were formerly known as the scissorbills.
Description
The three species are the only birds with distinctive uneven bills, in which the lower mandible is longer than the upper. This remarkable adaptation allows them to fish in a unique way, flying low and fast over streams. Their lower mandible skims or slices over the water's surface, ready to snap shut on any small fish unable to dart clear. The skimmers are sometimes included within the gull family Laridae, but separated in other treatments which consider them a sister group of the terns. The black skimmer has an additional adaptation and is the only species of bird known to have slit-shaped pupils. In the black skimmer the forehead, ends of the secondaries, tail feathers and underparts are white; the rest of the plumage is black, and the basal half of the bill is crimson. Their bills fall within their field of binocular vision, which enables them to carefully position the bill and capture prey. They are agile in flight and gather in large flocks along rivers and coastal sand banks.
They are tropical and subtropical species which lay 3–6 eggs on sandy beaches. The female incubates the eggs. Because of the species' restricted nesting habitat the three species are vulnerable to disturbance at their nesting sites. One species, the Indian skimmer, is considered endangered by the IUCN due to this as well as destruction and degradation of the lakes and rivers it uses for feeding.
Taxonomy
The genus Rynchops was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The genus name Rynchops is from the Ancient Greek ῥυνχος/rhunkhos meaning "bill" and κοπτω/koptō meaning "to cut off". The type species is the black skimmer (Rynchops niger).
As in later editions of the works of Linnaeus, the correct spelling (from the Greek words ῥύγχος, "bill", and ὤψ, "face", together meaning "beak-face") should be rhynchops, and this is often adopted. However, the misspelling rynchops was the one first published by Linnaeus and continues to be more commonly used. Similarly, the gender of the Greek and Roman words is feminine and the genus was originally treated as such (R. nigra), but Rynchops is now usually treated as a masculine noun (R. niger).
Species
The genus contains three species:
Black skimmer (Rynchops niger)
African skimmer (Rynchops flavirostris)
Indian skimmer (Rynchops albicollis)
| Biology and health sciences | Charadriiformes | Animals |
209874 | https://en.wikipedia.org/wiki/Density%20functional%20theory | Density functional theory | Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals, that is, functions that accept a function as input and output a single real number. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.
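To make the idea of a functional concrete, here is a minimal illustrative Python sketch (not taken from the source): the particle-number functional, N[n] = ∫ n(r) dr, accepts a whole density function and returns a single real number.

```python
import numpy as np

# A functional maps a function to a number. The particle-number functional
# N[n] = integral of n(r) dr is evaluated here numerically on a 1D grid.

def particle_number(n_values, grid):
    """Return N[n] for density samples n_values on the given grid."""
    return np.trapz(n_values, grid)

grid = np.linspace(0.0, 10.0, 1001)    # illustrative 1D grid
density = np.exp(-grid)                # a toy density profile n(r) = e^(-r)
print(particle_number(density, grid))  # ~1.0 -- a single real number
```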
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.
Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported to stray from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, given that they are not functional derivatives of the exchange–correlation energy with respect to the charge density. Consequently, it is not clear whether the second theorem of DFT holds in such conditions.
Overview of method
In the context of computational materials science, ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. This DFT potential is constructed as the sum of an external potential $V_\text{ext}$, which is determined solely by the structure and the elemental composition of the system, and an effective potential $V_\text{eff}$, which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with $n$ electrons can be studied as a set of $n$ one-electron Schrödinger-like equations, which are also known as Kohn–Sham equations.
Origins
Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these.
The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It set down the groundwork for reducing the many-body problem of $N$ electrons with $3N$ spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional.
In work that later won them the Nobel prize in chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated.
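For reference, the LDA exchange term mentioned above has a simple closed form (the standard Dirac exchange expression in Hartree atomic units, quoted here as an illustration rather than taken from this article):

$$E_x^{\mathrm{LDA}}[n] = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3} \int n(\mathbf r)^{4/3}\, d^3 r.$$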
Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system.
Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential $V$, in which the electrons are moving. A stationary electronic state is then described by a wavefunction $\Psi(\mathbf r_1, \dots, \mathbf r_N)$ satisfying the many-electron time-independent Schrödinger equation

$$\hat H \Psi = \left[\hat T + \hat V + \hat U\right]\Psi = \left[\sum_{i=1}^N \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i=1}^N V(\mathbf r_i) + \sum_{i<j}^N U(\mathbf r_i, \mathbf r_j)\right]\Psi = E\Psi,$$
where, for the $N$-electron system, $\hat H$ is the Hamiltonian, $E$ is the total energy, $\hat T$ is the kinetic energy, $\hat V$ is the potential energy from the external field due to positively charged nuclei, and $\hat U$ is the electron–electron interaction energy. The operators $\hat T$ and $\hat U$ are called universal operators, as they are the same for any $N$-electron system, while $\hat V$ is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term $\hat U$.
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with $\hat U$, onto a single-body problem without $\hat U$. In DFT the key variable is the electron density $n(\mathbf r)$, which for a normalized $\Psi$ is given by

$$n(\mathbf r) = N \int d^3 r_2 \cdots \int d^3 r_N\, \Psi^*(\mathbf r, \mathbf r_2, \dots, \mathbf r_N)\, \Psi(\mathbf r, \mathbf r_2, \dots, \mathbf r_N).$$
This relation can be reversed, i.e., for a given ground-state density $n_0(\mathbf r)$ it is possible, in principle, to calculate the corresponding ground-state wavefunction $\Psi_0$. In other words, $\Psi_0$ is a unique functional of $n_0$,

$$\Psi_0 = \Psi[n_0],$$
and consequently the ground-state expectation value of an observable $\hat O$ is also a functional of $n_0$:

$$O[n_0] = \left\langle \Psi[n_0] \middle| \hat O \middle| \Psi[n_0] \right\rangle.$$
In particular, the ground-state energy is a functional of $n_0$:

$$E_0 = E[n_0] = \left\langle \Psi[n_0] \middle| \hat T + \hat V + \hat U \middle| \Psi[n_0] \right\rangle,$$
where the contribution of the external potential $\left\langle \Psi[n_0] \middle| \hat V \middle| \Psi[n_0] \right\rangle$ can be written explicitly in terms of the ground-state density $n_0$:

$$V[n_0] = \int V(\mathbf r)\, n_0(\mathbf r)\, d^3 r.$$
More generally, the contribution of the external potential can be written explicitly in terms of an arbitrary density $n$:

$$V[n] = \int V(\mathbf r)\, n(\mathbf r)\, d^3 r.$$
The functionals $T[n]$ and $U[n]$ are called universal functionals, while $V[n]$ is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified $V$, one then has to minimize the functional

$$E[n] = T[n] + U[n] + \int V(\mathbf r)\, n(\mathbf r)\, d^3 r$$
with respect to $n(\mathbf r)$, assuming one has reliable expressions for $T[n]$ and $U[n]$. A successful minimization of the energy functional will yield the ground-state density $n_0$ and thus all other ground-state observables.
The variational problems of minimizing the energy functional $E[n]$ can be solved by applying the Lagrangian method of undetermined multipliers. First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term,

$$E_s[n] = \left\langle \Psi_s[n] \middle| \hat T_s + \hat V_s \middle| \Psi_s[n] \right\rangle,$$
where $\hat T_s$ denotes the kinetic-energy operator, and $\hat V_s$ is an effective potential in which the particles are moving. Based on $E_s$, Kohn–Sham equations of this auxiliary noninteracting system can be derived:

$$\left[-\frac{\hbar^2}{2m}\nabla^2 + V_s(\mathbf r)\right] \varphi_i(\mathbf r) = \varepsilon_i\, \varphi_i(\mathbf r),$$
which yields the orbitals $\varphi_i$ that reproduce the density $n(\mathbf r)$ of the original many-body system:

$$n(\mathbf r) = n_s(\mathbf r) = \sum_{i=1}^N \left|\varphi_i(\mathbf r)\right|^2.$$
The effective single-particle potential can be written as

$$V_s(\mathbf r) = V(\mathbf r) + \int \frac{n_s(\mathbf r')}{|\mathbf r - \mathbf r'|}\, d^3 r' + V_{\mathrm{XC}}[n_s(\mathbf r)],$$
where $V(\mathbf r)$ is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion, and the last term $V_{\mathrm{XC}}$ is the exchange–correlation potential. Here, $V_{\mathrm{XC}}$ includes all the many-particle interactions. Since the Hartree term and $V_{\mathrm{XC}}$ depend on $n(\mathbf r)$, which depends on the $\varphi_i$, which in turn depend on $V_s$, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for $n(\mathbf r)$, then calculates the corresponding $V_s$ and solves the Kohn–Sham equations for the $\varphi_i$. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
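The self-consistency loop described above can be summarized in code. The sketch below is schematic only: `build_effective_potential`, `solve_kohn_sham`, and `density_from_orbitals` are hypothetical placeholders for the real numerical machinery (basis sets, Hamiltonian diagonalization, exchange–correlation evaluation), not an actual library API.

```python
import numpy as np

def scf_loop(n_init, build_effective_potential, solve_kohn_sham,
             density_from_orbitals, tol=1e-6, max_iter=100, mixing=0.3):
    """Schematic Kohn-Sham self-consistent field (SCF) iteration.

    n_init: initial guess for the electron density on some grid.
    The three callables stand in for the real steps of a DFT code.
    """
    n = n_init
    for iteration in range(max_iter):
        v_s = build_effective_potential(n)       # V_ext + Hartree + V_XC
        orbitals = solve_kohn_sham(v_s)          # eigenproblem for phi_i
        n_new = density_from_orbitals(orbitals)  # n(r) = sum_i |phi_i|^2
        if np.max(np.abs(n_new - n)) < tol:      # converged?
            return n_new
        n = (1 - mixing) * n + mixing * n_new    # linear density mixing
    raise RuntimeError("SCF did not converge")
```

Linear mixing of old and new densities, as in the last line, is a common simple strategy for stabilizing the iteration; production codes use more sophisticated schemes (e.g. Pulay/DIIS mixing).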
| Physical sciences | Atomic physics | Physics |
209876 | https://en.wikipedia.org/wiki/Honeyeater | Honeyeater | The honeyeaters are a large and diverse family, Meliphagidae, of small to medium-sized birds. The family includes the Australian chats, myzomelas, friarbirds, wattlebirds, miners and melidectes. They are most common in Australia and New Guinea, and found also in New Zealand, the Pacific islands as far east as Samoa and Tonga, and the islands to the north and west of New Guinea known as Wallacea. Bali, on the other side of the Wallace Line, has a single species.
In total, there are 186 species in 55 genera, roughly half of them native to Australia, many of the remainder occupying New Guinea. With their closest relatives, the Maluridae (Australian fairy-wrens), Pardalotidae (pardalotes), and Acanthizidae (thornbills, Australian warblers, scrubwrens, etc.), they comprise the superfamily Meliphagoidea and originated early in the evolutionary history of the oscine passerine radiation. Although honeyeaters look and behave very much like other nectar-feeding passerines around the world (such as the sunbirds and flowerpeckers), they are unrelated, and the similarities are the consequence of convergent evolution.
The extent of the evolutionary partnership between honeyeaters and Australasian flowering plants is unknown, but probably substantial. A great many Australian plants are fertilised by honeyeaters, particularly the Proteaceae, Myrtaceae, and Ericaceae. It is known that the honeyeaters are important in New Zealand (see Anthornis) as well, and assumed that the same applies in other areas.
Description
Honeyeaters can be either nectarivorous, insectivorous, frugivorous, or a combination of nectar- and insect-eating. Unlike the hummingbirds of America, honeyeaters do not have extensive adaptations for hovering flight, though smaller members of the family do hover hummingbird-style to collect nectar from time to time. In general, honeyeaters prefer to flit quickly from perch to perch in the outer foliage, stretching up or sideways or hanging upside down at need. Many genera have a highly developed brush-tipped tongue, frayed and fringed with bristles which soak up liquids readily. The tongue is flicked rapidly and repeatedly into a flower, the upper mandible then compressing any liquid out when the bill is closed.
In addition to nectar, all or nearly all honeyeaters take insects and other small creatures, usually by hawking, sometimes by gleaning. A few of the larger species, notably the white-eared honeyeater, and the strong-billed honeyeater of Tasmania, probe under bark for insects and other morsels. Many species supplement their diets with a little fruit, and a small number eat considerable amounts of fruit, particularly in tropical rainforests and, oddly, in semi-arid scrubland. The painted honeyeater is a mistletoe specialist. Most, however, exist on a diet of nectar supplemented by varying quantities of insects. In general, the honeyeaters with long, fine bills are more nectarivorous, the shorter-billed species less so, but even specialised nectar eaters like the spinebills take extra insects to add protein to their diet when breeding.
The movements of honeyeaters are poorly understood. Most are at least partially mobile but many movements seem to be local, possibly between favourite haunts as the conditions change. Fluctuations in local abundance are common, but the small number of definitely migratory honeyeater species aside, the reasons are yet to be discovered. Many follow the flowering of favourite food plants. Arid zone species appear to travel further and less predictably than those of the more fertile areas. It seems probable that no single explanation will emerge.
Taxonomy and systematics
The genera Cleptornis (golden honeyeater) and Apalopteron (Bonin honeyeater), formerly treated in the Meliphagidae, have recently been transferred to the Zosteropidae on genetic evidence. The genus Notiomystis (New Zealand stitchbird), formerly classified in the Meliphagidae, has recently been removed to the newly erected Notiomystidae, of which it is the only member. MacGregor's bird-of-paradise, historically considered a bird-of-paradise (Paradisaeidae), was recently found to be a honeyeater. It is now known as MacGregor's honeyeater and is classified in the Meliphagidae.
The wattled smoky honeyeater (Melipotes carolae), described in 2007, had been discovered in December 2005 in the Foja Mountains of Papua, Indonesia.
In 2008, a study that included molecular phylogenetic analysis of museum specimens in the genera Moho and Chaetoptila, both extinct genera endemic to the Hawaiian islands, argued that these five species were not members of the Meliphagidae and instead belong to their own distinct family, the Mohoidae.
| Biology and health sciences | Corvoidea | null |
209948 | https://en.wikipedia.org/wiki/Rheumatology | Rheumatology | Rheumatology () is a branch of medicine devoted to the diagnosis and management of disorders whose common feature is inflammation in the bones, muscles, joints, and internal organs. Rheumatology covers more than 100 different complex diseases, collectively known as rheumatic diseases, which includes many forms of arthritis as well as lupus and Sjögren's syndrome. Doctors who have undergone formal training in rheumatology are called rheumatologists.
Many of these diseases are now known to be disorders of the immune system, and rheumatology has significant overlap with immunology, the branch of medicine that studies the immune system.
Rheumatologist
A rheumatologist is a physician who specializes in the medical sub-specialty called rheumatology. A rheumatologist holds a board certification after specialized training. In the United States, training in this field requires four years of undergraduate school, four years of medical school, and three years of residency, followed by two or three years of additional fellowship training. The requirements may vary in other countries. Rheumatologists are internists who are qualified by additional postgraduate training and experience in the diagnosis and treatment of arthritis and other diseases of the joints, muscles and bones. Many rheumatologists also conduct research to determine the causes of, and better treatments for, these disabling and sometimes fatal diseases. Treatment modalities are based on scientific research; currently, the practice of rheumatology is largely evidence-based.
Rheumatologists treat arthritis, autoimmune diseases, pain disorders affecting joints, and osteoporosis. There are more than 200 types of these diseases, including rheumatoid arthritis, osteoarthritis, gout, lupus, back pain, osteoporosis, and tendinitis. Some of these are very serious diseases that can be difficult to diagnose and treat. Rheumatologists also treat soft tissue problems related to the musculoskeletal system, including sports-related soft tissue disorders.
Pediatric rheumatologist
A pediatric rheumatologist is a pediatrician who has specialized in the treatment of children with rheumatic disease. Both specialties are important to address a child's milestone development and disease treatment throughout childhood. However, recognition of this sub-specialty has been slow, which has resulted in a global shortage of pediatric rheumatologists, and as a consequence, the demand for healthcare support far exceeds current service capacities. Raising awareness of this is important to attract more upcoming pediatricians into this rewarding area of healthcare.
Diseases
Diseases diagnosed or managed by rheumatologists include:
Degenerative arthropathies
Osteoarthritis
Inflammatory arthropathies
Rheumatoid arthritis
Spondyloarthropathies
Ankylosing spondylitis
Reactive arthritis (reactive arthropathy)
Psoriatic arthropathy
Enteropathic arthropathy
Juvenile Idiopathic Arthritis (JIA)
Crystal arthropathies: gout, pseudogout
Septic arthritis
Raynaud's Disease
Systemic conditions and connective tissue diseases
Lupus
Ehlers-Danlos syndrome
Sjögren's syndrome
Scleroderma (systemic sclerosis)
Polymyositis
Dermatomyositis
Polymyalgia rheumatica
Mixed connective tissue disease
Relapsing polychondritis
Adult-onset Still's disease
Sarcoidosis
Fibromyalgia
Myofascial pain syndrome
Vasculitis
Microscopic polyangiitis
Eosinophilic granulomatosis with polyangiitis
Granulomatosis with polyangiitis
Polyarteritis nodosa
Henoch–Schönlein purpura
Serum sickness
Giant cell arteritis, Temporal arteritis
Takayasu's arteritis
Behçet's disease
Kawasaki disease (mucocutaneous lymph node syndrome)
Thromboangiitis obliterans
Hereditary periodic fever syndromes
Soft tissue rheumatism
Local diseases and lesions affecting the joints and structures around the joints, including tendons, ligaments, capsules, bursae, stress fractures, muscles, nerve entrapment, vascular lesions, and ganglia. For example:
Low back pain
Tennis elbow
Golfer's elbow
Olecranon bursitis
Diagnosis
Physical examination
The following are examples of diagnostic methods that can be performed in a normal physical examination.
Schober's test tests the flexion of the lower back.
Multiple joint inspection
Musculoskeletal Examination
Screening Musculoskeletal Exam (SMSE) - a rapid assessment of structure and function
General Musculoskeletal Exam (GMSE) - a comprehensive assessment of joint inflammation
Regional Musculoskeletal Exam (RMSE) - focused assessments of structure, function and inflammation combined with special testing
Specialized
Laboratory tests (e.g. Erythrocyte Sedimentation Rate, Rheumatoid Factor, Anti-CCP (Anti-citrullinated protein antibody), ANA (Anti-Nuclear Antibody) )
X-rays, Ultrasounds, and other imaging methods of affected joints
Cytopathology and chemical pathology of fluid aspirated from affected joints (e.g. to differentiate between septic arthritis and gout)
Treatment
Most rheumatic diseases are treated with analgesics, NSAIDs (nonsteroidal anti-inflammatory drug), steroids (in serious cases), DMARDs (disease-modifying antirheumatic drugs), monoclonal antibodies, such as infliximab and adalimumab, the TNF inhibitor etanercept, and methotrexate for moderate to severe rheumatoid arthritis. The biologic agent rituximab (anti-B cell therapy) is now licensed for use in refractory rheumatoid arthritis.
Physiotherapy is vital in the treatment of many rheumatological disorders. Occupational therapy can help patients find alternative ways to perform common movements that would otherwise be restricted by their disease. Patients with rheumatoid arthritis often need a long-term, coordinated, multidisciplinary team approach to their management. Treatment is often tailored to the individual needs of each patient, depending on their response to and tolerance of medications.
Beginning in the 2000s, the incorporation of biopharmaceuticals (which include inhibitors of TNF-alpha, certain interleukins, and the JAK-STAT signaling pathway) into standards of care is one of the paramount developments in modern rheumatology.
Rheumasurgery
Rheumasurgery (or rheumatoid surgery) is a subfield of orthopedics concerned with the surgical treatment of patients with rheumatic diseases. The purpose of these interventions is to limit disease activity, relieve pain and improve function.
Rheumasurgical interventions can be divided into two groups. The first is early synovectomies, that is, the removal of the inflamed synovia in order to prevent spreading and halt destruction. The second group comprises so-called corrective interventions, i.e. interventions done after destruction has taken place. Among the corrective interventions are joint replacements, removal of loose bone or cartilage fragments, and a variety of interventions aimed at repositioning and/or stabilizing joints, such as arthrodesis.
Research directions
Recently, a large body of scientific research deals with the background of autoimmune disease, the cause of many rheumatic disorders. Also, the field of osteoimmunology has emerged to further examine the interactions between the immune system, joints, and bones. Epidemiological studies and medication trials are also being conducted. The Rheumatology Research Foundation is the largest private funding source of rheumatology research and training in the United States.
History
Rheumasurgery emerged from the cooperation of rheumatologists and orthopedic surgeons in Heinola, Finland, during the 1950s.
In 1970 a Norwegian investigation estimated that at least 50% of patients with rheumatic symptoms needed rheumasurgery as an integrated part of their treatment.
The European Rheumatoid Arthritis Surgical Society (ERASS) was founded in 1979.
Around the turn of the 21st century, focus for treatment of patients with rheumatic disease shifted, and pharmacological treatment became dominant, while surgical interventions became rarer.
| Biology and health sciences | Fields of medicine | Health |
209959 | https://en.wikipedia.org/wiki/Sodium%20hypochlorite | Sodium hypochlorite | Sodium hypochlorite is an alkaline inorganic chemical compound with the formula NaOCl (also written as NaClO). It is commonly known in a dilute aqueous solution as bleach or chlorine bleach. It is the sodium salt of hypochlorous acid, consisting of sodium cations (Na+) and hypochlorite anions (OCl−, also written as ClO−).
The anhydrous compound is unstable and may decompose explosively. It can be crystallized as the pentahydrate NaOCl·5H2O, a pale greenish-yellow solid which is not explosive and is stable if kept refrigerated.
Sodium hypochlorite is most often encountered as a pale greenish-yellow dilute solution referred to as chlorine bleach, which is a household chemical widely used (since the 18th century) as a disinfectant and bleaching agent. In solution, the compound is unstable and easily decomposes, liberating chlorine, which is the active principle of such products. Sodium hypochlorite is still the most important chlorine-based bleach.
Its corrosive properties, common availability, and reaction products make it a significant safety risk. In particular, mixing liquid bleach with other cleaning products, such as acids found in limescale-removing products, will release chlorine gas. Chlorine gas was utilized as a chemical weapon in World War I. A common misconception is that mixing bleach with ammonia also releases chlorine, but in reality they react to produce chloramines such as nitrogen trichloride. With excess ammonia and sodium hydroxide, hydrazine may be generated.
Chemistry
Stability of the solid
Anhydrous sodium hypochlorite can be prepared but, like many hypochlorites, it is highly unstable and decomposes explosively on heating or friction. The decomposition is accelerated by carbon dioxide at Earth's atmospheric levels (around 4 parts per ten thousand). It is a white solid with the orthorhombic crystal structure.
Sodium hypochlorite can also be obtained as a crystalline pentahydrate, NaOCl·5H2O, which is not explosive and is much more stable than the anhydrous compound. The Cl–O bond length in the pentahydrate is 1.686 Å. The transparent, light greenish-yellow, orthorhombic crystals contain 44% NaOCl by weight and melt at 25–27 °C. The compound decomposes rapidly at room temperature, so it must be kept under refrigeration. At lower temperatures, however, it is quite stable: reportedly only 1% decomposition after 360 days at 7 °C.
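For perspective (a back-of-the-envelope estimate assuming first-order kinetics; not from the source), 1% decomposition in 360 days corresponds to

$$k = -\frac{\ln(0.99)}{360\ \mathrm{days}} \approx 2.8 \times 10^{-5}\ \mathrm{day^{-1}}, \qquad t_{1/2} = \frac{\ln 2}{k} \approx 2.5 \times 10^{4}\ \mathrm{days} \approx 68\ \mathrm{years}.$$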
A 1966 US patent claims that stable solid sodium hypochlorite dihydrate can be obtained by carefully excluding chloride ions (), which are present in the output of common manufacturing processes and are said to catalyze the decomposition of hypochlorite into chlorate () and chloride. In one test, the dihydrate was claimed to show only 6% decomposition after 13.5 months of storage at −25 °C. The patent also claims that the dihydrate can be reduced to the anhydrous form by vacuum drying at about 50 °C, yielding a solid that showed no decomposition after 64 hours at −25 °C.
Equilibria and stability of solutions
At typical ambient temperatures, sodium hypochlorite is more stable in dilute solutions that contain solvated Na+ and OCl− ions. The density of the solution is 1.093 g/mL at 5% concentration, and 1.21 g/mL at 14% concentration and 20 °C. Stoichiometric solutions are fairly alkaline, with pH 11 or higher, since hypochlorous acid is a weak acid:

OCl− + H2O ⇌ HOCl + OH−
The following species and equilibria are present in NaOCl/NaCl solutions:

HOCl ⇌ H+ + OCl−
HOCl + H+ + Cl− ⇌ Cl2(aq) + H2O
Cl2(aq) ⇌ Cl2(g)
The second equilibrium equation above will be shifted to the right if the chlorine Cl2 is allowed to escape as gas. The ratios of Cl2, HOCl, and OCl− in solution are also pH dependent. At pH below 2, the majority of the chlorine in the solution is in the form of dissolved elemental Cl2. At pH greater than 7.4, the majority is in the form of hypochlorite ClO−. The equilibrium can be shifted by adding acids (such as hydrochloric acid) or bases (such as sodium hydroxide) to the solution:

OCl− + H+ → HOCl (acid added)
HOCl + OH− → OCl− + H2O (base added)
At a pH of about 4, such as obtained by the addition of strong acids like hydrochloric acid, the amount of undissociated (nonionized) HOCl is highest. The reaction can be written as:

NaOCl + HCl → NaCl + HOCl
Sodium hypochlorite solutions combined with acid evolve chlorine gas, particularly strongly at pH < 2, by the reactions:

HOCl + HCl → Cl2 + H2O
NaOCl + 2 HCl → Cl2 + NaCl + H2O
At pH > 8, the chlorine is practically all in the form of hypochlorite anions (OCl−). The solutions are fairly stable at pH 11–12. Even so, one report claims that a conventional 13.6% NaOCl reagent solution lost 17% of its strength after being stored for 360 days at 7 °C. For this reason, in some applications one may use more stable chlorine-releasing compounds, such as calcium hypochlorite Ca(ClO)2 or trichloroisocyanuric acid C3Cl3N3O3.
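The pH dependence quoted above follows from the acid–base equilibrium of hypochlorous acid. The snippet below is a small illustrative sketch using the Henderson–Hasselbalch relation; the pKa of about 7.5 for HOCl is a standard literature value, not taken from this article.

```python
PKA_HOCL = 7.5  # approximate pKa of hypochlorous acid (literature value)

def hocl_fraction(pH):
    """Fraction of total hypochlorite present as undissociated HOCl.

    Henderson-Hasselbalch: [OCl-]/[HOCl] = 10**(pH - pKa).
    """
    return 1.0 / (1.0 + 10.0 ** (pH - PKA_HOCL))

for pH in (4.0, 6.0, 7.5, 9.0, 11.0):
    print(f"pH {pH:>4}: {100.0 * hocl_fraction(pH):5.1f}% HOCl")
```

This two-species model ignores the formation of dissolved Cl2 at very low pH, so it is only meaningful above roughly pH 3.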
Anhydrous sodium hypochlorite is soluble in methanol, and solutions are stable.
Decomposition to chlorate or oxygen
In solution, under certain conditions, the hypochlorite anion may also disproportionate (autoxidize) to chloride and chlorate:

3 OCl− → 2 Cl− + ClO3−
In particular, this reaction occurs in sodium hypochlorite solutions at high temperatures, forming sodium chlorate and sodium chloride:

3 NaOCl → NaClO3 + 2 NaCl
This reaction is exploited in the industrial production of sodium chlorate.
An alternative decomposition of hypochlorite produces oxygen instead:

2 OCl− → 2 Cl− + O2
In hot sodium hypochlorite solutions, this reaction competes with chlorate formation, yielding sodium chloride and oxygen gas:

2 NaOCl → 2 NaCl + O2
These two decomposition reactions of NaClO solutions are maximized at pH around 6. For example, at 80 °C, with hypochlorite and chloride concentrations of 80 mM, over the pH range 5−10.5, both reactions have rates proportional to [HOCl]2[OCl−]; decomposition is fastest at pH 6.5, and chlorate is produced with ~95% efficiency. Above pH 11, both reactions have rates proportional to [OCl−]2, decomposition is much slower, and chlorate is produced with ~90% efficiency. This decomposition is affected by light and by metal ion catalysts such as copper, nickel, cobalt, and iridium. Catalysts like sodium dichromate Na2Cr2O7 and sodium molybdate Na2MoO4 may be added industrially to reduce the oxygen pathway, but a report claims that only the latter is effective.
Titration
Titration of hypochlorite solutions is often done by adding a measured sample to an excess amount of acidified solution of potassium iodide (KI) and then titrating the liberated iodine (I2) with a standard solution of sodium thiosulfate or phenylarsine oxide, using starch as indicator, until the blue color disappears.
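The stoichiometry behind this procedure is standard iodometry (stated here for illustration; the numbers in the worked example below are hypothetical):

OCl− + 2 I− + 2 H+ → Cl− + I2 + H2O
I2 + 2 S2O3^2− → 2 I− + S4O6^2−

Each mole of hypochlorite liberates one mole of iodine, which in turn consumes two moles of thiosulfate, so the hypochlorite content equals half the titrated moles of thiosulfate. For example, if 25.0 mL of 0.100 M thiosulfate were consumed, the sample would contain 2.50 mmol / 2 = 1.25 mmol of OCl−.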
According to one US patent, the stability of the sodium hypochlorite content of solids or solutions can be determined by monitoring the infrared absorption due to the O–Cl bond. The characteristic wavelength is given as 140.25 μm for water solutions, 140.05 μm for the solid dihydrate NaOCl·2H2O, and 139.08 μm for the anhydrous mixed salt.
Oxidation of organic compounds
Oxidation of starch by sodium hypochlorite, which adds carbonyl and carboxyl groups, is relevant to the production of modified starch products.
In the presence of a phase-transfer catalyst, alcohols are oxidized to the corresponding carbonyl compound (aldehyde or ketone). Sodium hypochlorite can also oxidize organic sulfides to sulfoxides or sulfones; disulfides or thiols to sulfonyl halides; and imines to oxaziridines. It can also de-aromatize phenols.
Oxidation of metals and complexes
Heterogeneous reactions of sodium hypochlorite and metals such as zinc proceed slowly to give the metal oxide or hydroxide:
NaOCl + Zn → ZnO + NaCl
Homogeneous reactions with metal coordination complexes proceed somewhat faster. This has been exploited in the Jacobsen epoxidation.
Other reactions
If not properly stored in airtight containers, sodium hypochlorite reacts with carbon dioxide to form sodium carbonate:

CO2 + 2 NaOCl + H2O → Na2CO3 + 2 HOCl
Sodium hypochlorite reacts with most nitrogen compounds to form volatile monochloramine, dichloramine, and nitrogen trichloride:

NH3 + NaOCl → NH2Cl + NaOH
NH2Cl + NaOCl → NHCl2 + NaOH
NHCl2 + NaOCl → NCl3 + NaOH
Neutralization
Sodium thiosulfate is an effective chlorine neutralizer. Rinsing with a 5 mg/L solution, followed by washing with soap and water, will remove chlorine odor from the hands.
Production
Chlorination of soda
Potassium hypochlorite was first produced in 1789 by Claude Louis Berthollet in his laboratory on the Quai de Javel in Paris, France, by passing chlorine gas through a solution of potash lye. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of potassium hypochlorite. Antoine Labarraque replaced potash lye by the cheaper soda lye, thus obtaining sodium hypochlorite (Eau de Labarraque).
Cl2 + 2 NaOH → NaCl + NaOCl + H2O
Hence, chlorine is simultaneously reduced and oxidized; this process is known as disproportionation.
The process is also used to prepare the pentahydrate for industrial and laboratory use. In a typical process, chlorine gas is added to a 45–48% NaOH solution. Some of the sodium chloride precipitates and is removed by filtration, and the pentahydrate is then obtained by cooling the filtrate to 12 °C.
From calcium hypochlorite
Another method involved the reaction of sodium carbonate ("washing soda") with chlorinated lime ("bleaching powder"), a mixture of calcium hypochlorite Ca(OCl)2, calcium chloride CaCl2, and calcium hydroxide Ca(OH)2:
Na2CO3 + Ca(OCl)2 → CaCO3 + 2 NaOCl
This method was commonly used to produce hypochlorite solutions for use as a hospital antiseptic that was sold after World War I under the names "Eusol", an abbreviation for Edinburgh University Solution Of (chlorinated) Lime – a reference to the university's pathology department, where it was developed.
Electrolysis of brine
Near the end of the nineteenth century, E. S. Smith patented the chloralkali process: a method of producing sodium hypochlorite involving the electrolysis of brine to produce sodium hydroxide and chlorine gas, which were then mixed to form sodium hypochlorite. The key reactions are:
2 Cl− → Cl2 + 2 e− (at the anode)
2 H2O + 2 e− → H2 + 2 OH− (at the cathode)
Both electric power and brine solutions were in cheap supply at the time, and various enterprising marketers took advantage of the situation to satisfy the market's demand for sodium hypochlorite. Bottled solutions of sodium hypochlorite were sold under numerous trade names.
Today, an improved version of this method, known as the Hooker process (named after Hooker Chemicals, acquired by Occidental Petroleum), is the only large-scale industrial method of sodium hypochlorite production. In the process, sodium hypochlorite (NaClO) and sodium chloride (NaCl) are formed when chlorine is passed into a cold dilute sodium hydroxide solution. The chlorine is prepared industrially by electrolysis with minimal separation between the anode and the cathode. The solution must be kept below 40 °C (by cooling coils) to prevent the undesired formation of sodium chlorate.
Commercial solutions always contain significant amounts of sodium chloride (common salt) as the main by-product, as seen in the equation above.
From hypochlorous acid and soda
A 1966 patent describes the production of the solid, stable dihydrate by reacting a chloride-free solution of hypochlorous acid HClO (such as one prepared from chlorine monoxide Cl2O and water) with a concentrated solution of sodium hydroxide. In a typical preparation, 255 mL of a solution with 118 g/L HClO is slowly added with stirring to a solution of 40 g of NaOH in water at 0 °C. Some sodium chloride precipitates and is removed by filtration. The solution is vacuum evaporated at 40–50 °C and 1–2 mmHg until the dihydrate crystallizes out. The crystals are vacuum-dried to produce a free-flowing crystalline powder.
The same principle was used in a 1993 patent to produce concentrated slurries of the pentahydrate NaOCl·5H2O. Typically, a 35% solution (by weight) of HClO is combined with sodium hydroxide at or below about 25 °C. The resulting slurry contains about 35% NaClO and is relatively stable due to the low concentration of chloride.
Packaging and sale
Household bleach sold for use in laundering clothes is a 3–8% solution of sodium hypochlorite at the time of manufacture. Strength varies from one formulation to another and gradually decreases with long storage. Sodium hydroxide is usually added in small amounts to household bleach to slow down the decomposition of NaClO.
Patio black-spot remover products sold for domestic use are ~10% solutions of sodium hypochlorite.
According to Univar's safety data sheet, 10–25% solutions of sodium hypochlorite are supplied under synonyms or trade names including bleach, Hypo, Everchlor, Chloros, Hispec, Bridos, Bleacol, and Vo-redox 9110.
A 12% solution is widely used in waterworks for the chlorination of water, and a 15% solution is more commonly used for disinfection of wastewater in treatment plants. Sodium hypochlorite can also be used for point-of-use disinfection of drinking water, using 0.2–2 mg of sodium hypochlorite per liter of water.
Dilute solutions (50 ppm to 1.5%) are found in disinfecting sprays and wipes used on hard surfaces.
Uses
Bleaching
Household bleach is, in general, a solution containing 3–8% sodium hypochlorite, by weight, and 0.01–0.05% sodium hydroxide; the sodium hydroxide is used to slow the decomposition of sodium hypochlorite into sodium chloride and sodium chlorate.
Cleaning
Sodium hypochlorite has destaining properties. Among other applications, it can be used to remove mold stains, dental stains caused by fluorosis, and stains on crockery, especially those caused by the tannins in tea. It has also been used in laundry detergents, as a surface cleaner, and in sodium hypochlorite washes.
Its bleaching, cleaning, deodorizing, and caustic effects are due to oxidation and hydrolysis (saponification). Organic dirt exposed to hypochlorite becomes water-soluble and non-volatile, which reduces its odor and facilitates its removal.
Disinfection
Sodium hypochlorite in solution exhibits broad-spectrum anti-microbial activity and is widely used in healthcare facilities in a variety of settings. It is usually diluted in water depending on its intended use. "Strong chlorine solution" is a 0.5% solution of hypochlorite (containing approximately 5000 ppm free chlorine) used for disinfecting areas contaminated with body fluids, including large blood spills (the area is first cleaned with detergent before being disinfected). It may be made by diluting household bleach as appropriate (normally 1 part bleach to 9 parts water). Such solutions have been demonstrated to inactivate both C. difficile and HPV. "Weak chlorine solution" is a 0.05% solution of hypochlorite used for washing hands, but is normally prepared with calcium hypochlorite granules.
"Dakin's Solution" is a disinfectant solution containing a low concentration of sodium hypochlorite and some boric acid or sodium bicarbonate to stabilize the pH. It is effective with NaOCl concentrations as low as 0.025%.
US government regulations allow food processing equipment and food contact surfaces to be sanitized with solutions containing bleach, provided that the solution is allowed to drain adequately before contact with food and that the solutions do not exceed 200 parts per million (ppm) available chlorine (for example, one tablespoon of typical household bleach containing 5.25% sodium hypochlorite, per gallon of water). If higher concentrations are used, the surface must be rinsed with potable water after sanitizing.
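The dilution arithmetic behind these figures is simple enough to check directly; the following minimal sketch verifies both the 1-part-in-10 dilution mentioned above and the tablespoon-per-gallon figure, treating the quoted percentages as interchangeable with available chlorine and ignoring solution densities, so the results are approximations only:

# Approximate strength checks for the dilutions quoted above.
TBSP_ML = 14.79       # one US tablespoon, in millilitres
GALLON_ML = 3785.41   # one US gallon, in millilitres
bleach = 0.0525       # 5.25% sodium hypochlorite household bleach

# "1 part bleach to 9 parts water" gives roughly a 0.5% solution:
print(f"{bleach / 10:.2%}")                  # ≈ 0.53%

# One tablespoon per gallon of water lands near the 200 ppm limit:
fraction = TBSP_ML / (GALLON_ML + TBSP_ML)
print(f"{bleach * fraction * 1e6:.0f} ppm")  # ≈ 204 ppm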
A similar concentration of bleach in warm water is used to sanitize surfaces before brewing beer or wine. Surfaces must be rinsed with sterilized (boiled) water to avoid imparting flavors to the brew; the chlorinated byproducts of sanitizing surfaces are also harmful. The mode of disinfectant action of sodium hypochlorite is similar to that of hypochlorous acid.
Solutions containing more than 500 ppm available chlorine are corrosive to some metals, alloys, and many thermoplastics (such as acetal resin) and need to be thoroughly removed afterward, so the bleach disinfection is sometimes followed by an ethanol disinfection. Liquids containing sodium hypochlorite as the main active component are also used for household cleaning and disinfection, for example toilet cleaners. Some cleaners are formulated to be viscous so as not to drain quickly from vertical surfaces, such as the inside of a toilet bowl.
The undissociated (nonionized) hypochlorous acid is believed to react with and inactivate bacterial and viral enzymes.
Neutrophils of the human immune system produce small amounts of hypochlorite inside phagosomes, which digest bacteria and viruses.
Deodorizing
Sodium hypochlorite has deodorizing properties, which go hand in hand with its cleaning properties.
Waste water treatment
Sodium hypochlorite solutions have been used to treat dilute cyanide wastewater, such as electroplating wastes. In batch treatment operations, sodium hypochlorite has been used to treat more concentrated cyanide wastes, such as silver cyanide plating solutions. Toxic cyanide is oxidized to cyanate (OCN−), which is not toxic, idealized as follows:
NaOCl + NaCN → NaOCN + NaCl
Sodium hypochlorite is commonly used as a biocide in industrial applications to control slime and bacteria formation in water systems used at power plants, pulp and paper mills, etc., in solutions typically of 10–15% by weight.
Endodontics
Sodium hypochlorite is the medicament of choice in endodontic therapy, due to its efficacy against pathogenic organisms and its ability to dissolve pulp tissue. Its concentration for use varies from 0.5% to 5.25%. At low concentrations it dissolves mainly necrotic tissue; at higher concentrations it also dissolves vital tissue and additional bacterial species. One study has shown that Enterococcus faecalis was still present in the dentin after 40 minutes of exposure to 1.3% and 2.5% sodium hypochlorite, whereas 40 minutes at a concentration of 5.25% was effective in E. faecalis removal. In addition to higher concentrations of sodium hypochlorite, longer exposure times and warming the solution (60 °C) also increase its effectiveness in removing soft tissue and bacteria within the root canal chamber. A 2% concentration is common because there is less risk of an iatrogenic hypochlorite incident: an immediate reaction of severe pain, followed by edema, haematoma, and ecchymosis, as a consequence of the solution escaping the confines of the tooth and entering the periapical space. This may be caused by binding or excessive pressure on the irrigant syringe, or it may occur if the tooth has an unusually large apical foramen.
Nerve agent neutralization
At the various nerve agent (chemical warfare nerve gas) destruction facilities throughout the United States, 0.5–2.5% sodium hypochlorite is used to remove all traces of nerve agent or blister agent from personal protective equipment after personnel make an entry into toxic areas.
Sodium hypochlorite at 0.5–2.5% is also used to neutralize any accidental releases of nerve agent in the toxic areas.
Lesser concentrations of sodium hypochlorite are used similarly in the Pollution Abatement System to ensure that no nerve agent is released into the furnace flue gas.
Reduction of skin damage
Dilute bleach baths have been used for decades to treat moderate to severe eczema in humans, but it has not been clear why they work. One reason bleach helps is that eczema frequently leads to secondary infections, especially from bacteria like Staphylococcus aureus, which makes the condition difficult to manage. Staphylococcus aureus infection is related to the pathogenesis of eczema and atopic dermatitis (AD). Bleach baths are one method for lowering the risk of staph infections in people with eczema. The antibacterial and anti-inflammatory properties of sodium hypochlorite contribute to the reduction of harmful bacteria on the skin and the reduction of inflammation, respectively. According to work published by researchers at the Stanford University School of Medicine in November 2013, a very dilute (0.005%) solution of sodium hypochlorite in water was successful in treating skin damage with an inflammatory component caused by radiation therapy, excess sun exposure, or aging in laboratory mice. Mice with radiation dermatitis given daily 30-minute baths in bleach solution experienced less severe skin damage and better healing and hair regrowth than animals bathed in water. A molecule called nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) is known to play a critical role in inflammation, aging, and response to radiation. The researchers found that if NF-κB activity was blocked in elderly mice by bathing them in bleach solution, the animals' skin began to look younger, going from old and fragile to thicker, with increased cell proliferation. The effect diminished after the baths were stopped, indicating that regular exposure was necessary to maintain skin thickness.
Safety
Dilute sodium hypochlorite solutions (as in household bleach) are irritating mainly to the skin and respiratory tract. Short-term skin contact with household bleach may cause dryness of the skin.
It is estimated that there are about 3,300 accidents needing hospital treatment caused by sodium hypochlorite solutions each year in British homes (RoSPA, 2002).
Oxidation and corrosion
Sodium hypochlorite is a strong oxidizer. Oxidation reactions are corrosive. Solutions burn the skin and cause eye damage, especially when used in concentrated forms. As recognized by the NFPA, however, only solutions containing more than 40% sodium hypochlorite by weight are considered hazardous oxidizers. Solutions less than 40% are classified as a moderate oxidizing hazard (NFPA 430, 2000).
Household bleach and pool chlorinator solutions are typically stabilized by a significant concentration of lye (caustic soda, NaOH) as part of the manufacturing reaction. This additive will by itself cause caustic irritation or burns due to defatting and saponification of skin oils and destruction of tissue. The slippery feel of bleach on the skin is due to this process.
Storage hazards
Contact of sodium hypochlorite solutions with metals may evolve flammable hydrogen gas. Containers may explode when heated due to the release of chlorine gas.
Hypochlorite solutions are corrosive to common container materials such as stainless steel and aluminium. The few compatible metals include titanium (which however is not compatible with dry chlorine) and tantalum. Glass containers are safe. Some plastics and rubbers are affected too; safe choices include polyethylene (PE), high density polyethylene (HDPE, PE-HD), polypropylene (PP), some chlorinated and fluorinated polymers such as polyvinyl chloride (PVC), polytetrafluoroethylene (PTFE), and polyvinylidene fluoride (PVDF); as well as ethylene propylene rubber, and Viton.
Containers must allow the venting of oxygen produced by decomposition over time, otherwise, they may burst.
Reactions with other common products
Mixing bleach with some household cleaners can be hazardous.
Sodium hypochlorite solutions, such as liquid bleach, will release toxic chlorine gas when mixed with an acid, such as hydrochloric acid or vinegar.
A 2008 study indicated that sodium hypochlorite and organic chemicals (e.g., surfactants, fragrances) contained in several household cleaning products can react to generate chlorinated organic compounds. The study showed that indoor air concentrations significantly increase (8–52 times for chloroform and 1–1170 times for carbon tetrachloride, respectively, above baseline quantities in the household) during the use of bleach containing products.
In particular, mixing hypochlorite bleaches with amines (for example, cleaning products that contain or release ammonia, ammonium salts, urea, or related compounds and biological materials such as urine) produces chloramines. These gaseous products can cause acute lung injury. Chronic exposure, for example, from the air at swimming pools where chlorine is used as the disinfectant, can lead to the development of atopic asthma.
Bleach can react violently with hydrogen peroxide and produce oxygen gas:
NaOCl + H2O2 → NaCl + H2O + O2
Explosive reactions or byproducts can also occur in industrial and laboratory settings when sodium hypochlorite is mixed with diverse organic compounds.
Limitations in health care
The UK's National Institute for Health and Care Excellence in October 2008 recommended that Dakin's solution should not be used in routine wound care.
Environmental impact
In spite of its strong biocidal action, sodium hypochlorite per se has limited environmental impact, since the hypochlorite ion rapidly degrades before it can be absorbed by living beings.
However, one major concern arising from sodium hypochlorite use is that it tends to form persistent chlorinated organic compounds, including known carcinogens, that can be absorbed by organisms and enter the food chain. These compounds may be formed during household storage and use as well as during industrial use. For example, when household bleach and wastewater were mixed, 1–2% of the available chlorine was observed to form organic compounds. As of 1994, not all the byproducts had been identified, but identified compounds include chloroform and carbon tetrachloride. The exposure to these chemicals from use is estimated to be within occupational exposure limits.
| Physical sciences | Halide oxyanions | Chemistry |
209960 | https://en.wikipedia.org/wiki/Auriga | Auriga | Auriga is a constellation in the northern celestial hemisphere. It is one of the 88 modern constellations; it was among the 48 constellations listed by the 2nd-century astronomer Ptolemy. Its name is Latin for '(the) charioteer', associating it with various mythological beings, including Erichthonius and Myrtilus. Auriga is most prominent during winter evenings in the Northern Hemisphere, as are five other constellations that have stars in the Winter Hexagon asterism. Because of its northern declination, Auriga is only visible in its entirety as far south as −34°; for observers farther south it lies partially or fully below the horizon. A large constellation, with an area of 657 square degrees, it is half the size of the largest constellation, Hydra.
Its brightest star, Capella, is an unusual multiple star system among the brightest stars in the night sky. Beta Aurigae is a notable eclipsing binary in the constellation; Epsilon Aurigae, a nearby eclipsing binary with an unusually long period, has been studied intensively. Because of its position near the winter Milky Way, Auriga has many bright open clusters within its borders, including M36, M37, and M38, popular targets for amateur astronomers. In addition, it has one prominent nebula, the Flaming Star Nebula, associated with the variable star AE Aurigae.
In Chinese mythology, Auriga's stars were incorporated into several constellations, including the celestial emperors' chariots, made up of the modern constellation's brightest stars. Auriga is home to the radiant for the Aurigids, Zeta Aurigids, Delta Aurigids, and the hypothesized Iota Aurigids.
History and mythology
The first record of Auriga's stars was in Mesopotamia as a constellation called GAM, representing a scimitar or crook. However, this may have represented just Capella (Alpha Aurigae) or the modern constellation as a whole; this figure was alternatively called Gamlum or MUL.GAM in the MUL.APIN. The crook of Auriga stood for a goat-herd or shepherd. It was formed from most of the stars of the modern constellation; all of the bright stars were included except for Elnath, traditionally assigned to both Taurus and Auriga. Later, Bedouin astronomers created constellations that were groups of animals, where each star represented one animal. The stars of Auriga comprised a herd of goats, an association also present in Greek mythology. The association with goats carried into the Greek astronomical tradition, though it later became associated with a charioteer along with the shepherd.
In Greek mythology, Auriga is often identified as the mythological Greek hero Erichthonius of Athens, the chthonic son of Hephaestus who was raised by the goddess Athena. Erichthonius was generally credited with inventing the quadriga, the four-horse chariot, which he used in the battle against the usurper Amphictyon, the event that made Erichthonius the king of Athens. His chariot was created in the image of the Sun's chariot. The Athenian hero dedicated himself to Athena and, soon after, Zeus raised him into the night sky in honor of his ingenuity and heroic deeds.
Auriga, however, is sometimes described as Myrtilus, who was Hermes's son and the charioteer of Oenomaus. The association of Auriga and Myrtilus is supported by depictions of the constellation, which rarely show a chariot. Myrtilus's chariot was destroyed in a race intended for suitors to win the heart of Oenomaus's daughter Hippodamia. Myrtilus earned his position in the sky when Hippodamia's successful suitor, Pelops, killed him, despite his complicity in helping Pelops win her hand. After his death, Myrtilus's father Hermes placed him in the sky. Yet another mythological association of Auriga is Theseus's son Hippolytus. He was ejected from Athens after he refused the romantic advances of his stepmother Phaedra, who committed suicide as a result. He was killed when his chariot was wrecked, but revived by Asclepius.
Auriga is also said to represent Phaethon, son of the sun god Helios, who tricked his father into letting him drive his chariot for a day. Phaethon crashed and burned, scorching the earth, and was then placed in the night sky as Auriga. Regardless of Auriga's specific representation, it is likely that the constellation was created by the ancient Greeks to commemorate the importance of the chariot in their society.
An incidental appearance of Auriga in Greek mythology is as the limbs of Medea's brother. In the myth of Jason and the Argonauts, as they journeyed home, Medea killed her brother and dismembered him, flinging the parts of his body into the sea, represented by the Milky Way. Each individual star represents a different limb.
Capella is associated with the mythological she-goat Amalthea, who breast-fed the infant Zeus. It forms an asterism with the stars Epsilon Aurigae, Zeta Aurigae, and Eta Aurigae, the latter two of which are known as the Haedi (the Kids). Though most often associated with Amalthea, Capella has sometimes been associated with Amalthea's owner, a nymph. The myth of the nymph says that the goat's hideous appearance, resembling a Gorgon, was partially responsible for the Titans' defeat, because Zeus skinned the goat and wore it as his aegis. The asterism containing the goat and kids had been a separate constellation; however, Ptolemy merged the Charioteer and the Goats in the 2nd-century Almagest. Before that, Capella was sometimes seen as its own constellation—by Pliny the Elder and Manilius—called Capra, Caper, or Hircus, all of which relate to its status as the "goat star". Zeta Aurigae and Eta Aurigae were first called the "Kids" by Cleostratus, an ancient Greek astronomer.
Traditionally, illustrations of Auriga represent it as a chariot and its driver. The charioteer holds a goat over his left shoulder and has two kids under his left arm; he holds the reins to the chariot in his right hand. However, depictions of Auriga have been inconsistent over the years. The reins in his right hand have also been drawn as a whip, though Capella is almost always over his left shoulder and the Kids under his left arm. The 1488 atlas Hyginus deviated from this typical depiction by showing a four-wheeled cart driven by Auriga, who holds the reins of two oxen, a horse, and a zebra. Jacob Micyllus depicted Auriga in his Hyginus of 1535 as a charioteer with a two-wheeled cart, powered by two horses and two oxen. Arabic and Turkish depictions of Auriga varied wildly from those of the European Renaissance; one Turkish atlas depicted the stars of Auriga as a mule, called Mulus clitellatus by Johann Bayer. One unusual representation of Auriga, from 17th-century France, showed Auriga as Adam kneeling on the Milky Way, with a goat wrapped around his shoulders.
Occasionally, Auriga is seen not as the Charioteer but as Bellerophon, the mortal rider of Pegasus who dared to approach Mount Olympus. In this version of the tale, Jupiter pitied Bellerophon for his foolishness and placed him in the stars.
Oxford research finds it likely that the group was also named Agitator in about the 15th century, and provides a quotation as late as 1623 from a multi-topic work by Gerard de Malynes. Some of the stars of Auriga were incorporated into a now-defunct constellation called Telescopium Herschelii, introduced by Maximilian Hell to honor William Herschel's discovery of Uranus. Originally, it comprised two constellations, Tubus Herschelii Major, in Gemini, Lynx, and Auriga, and Tubus Herschelii Minor, in Orion and Taurus; both represented Herschel's telescopes. Johann Bode combined Hell's constellations into Telescopium Herschelii in 1801, located mostly in Auriga.
Since the time of Ptolemy, Auriga has remained a constellation and is officially recognized by the International Astronomical Union, although like all modern constellations, it is now defined as a specific region of the sky that includes both the ancient pattern and the surrounding stars. In 1922, the IAU designated its recommended three-letter abbreviation, "Aur". The official boundaries of Auriga were created in 1930 by Belgian astronomer Eugène Delporte as a polygon of 20 segments. Its right ascension is between 4h 37.5m and 7h 30.5m and its declination is between 27.9° and 56.2° in the equatorial coordinate system.
In non-Western astronomy
The stars of Auriga were incorporated into several Chinese constellations. Wuche, the five chariots of the celestial emperors and the representation of the grain harvest, was a constellation formed by Alpha Aurigae, Beta Aurigae, Beta Tauri, Theta Aurigae, and Iota Aurigae. Sanzhu or Zhu was one of three constellations which represented poles for horses to be tethered. They were formed by the triplets of Epsilon, Zeta, and Eta Aurigae; Nu, Tau, and Upsilon Aurigae; and Chi and 26 Aurigae, with one other undetermined star. Xianchi, the pond where the sun set and Tianhuang, a pond, bridge, or pier, were other constellations in Auriga, though the stars that composed them are undetermined. Zuoqi, representing chairs for the emperor and other officials, was made up of nine stars in the east of the constellation. Bagu, a constellation mostly formed from stars in Camelopardalis representing different types of crops, included the northern stars of Delta and Xi Aurigae.
In ancient Hindu astronomy, Capella represented the heart of Brahma and was important religiously. Ancient Peruvian peoples saw Capella, called Colca, as a star intimately connected to the affairs of shepherds.
In Brazil, the Bororo people incorporate the stars of Auriga into a massive constellation representing a caiman; its southern stars represent the end of the animal's tail. The eastern portion of Taurus is the rest of the tail, while Orion is its body and Lepus is the head. This constellation arose because of the prominence of caimans in daily Amazonian life. There is evidence that Capella was significant to the Aztec people, as the Late Classic site Monte Albán has a marker for the star's heliacal rising. Indigenous peoples of California and Nevada also noticed the bright pattern of Auriga's stars. To them, the constellation's bright stars formed a curve that was represented in crescent-shaped petroglyphs. The indigenous Pawnee of North America recognized a constellation with the same major stars as modern Auriga: Alpha, Beta, Gamma (Beta Tauri), Theta, and Iota Aurigae.
The people of the Marshall Islands featured Auriga in the myth of Dümur, which tells the story of the creation of the sky. Antares in Scorpius represents Dümur, the oldest son of the stars' mother, and the Pleiades represent her youngest son. The mother of the stars, Ligedaner, is represented by Capella; she lived on the island of Alinablab. She told her sons that the first to reach an eastern island would become the King of the Stars, and asked Dümur to let her come in his canoe. He refused, as did each of her sons in turn, except for Pleiades. Pleiades won the race with the help of Ligedaner, and became the King of the Stars. Elsewhere in the central Caroline Islands, Capella was called Jefegen uun (variations include efang alul, evang-el-ul, and iefangel uul), meaning "north of Aldebaran". Different names were noted for Auriga and Capella in Eastern Pacific societies. On Pukapuka, the figure of modern Auriga was called Te Wale-o-Tutakaiolo ("The house of Tutakaiolo"); in the Society Islands, it was called Faa-nui ("Great Valley"). Capella itself was called Tahi-anii ("Unique Sovereign") in the Societies. Hoku-lei was the name for Capella but may have been the name for the whole constellation; the name means "Star-wreath" and refers to one of the wives of the Pleiades, called Makalii.
The stars of Auriga feature in Inuit constellations. Quturjuuk, meaning "collar-bones", was a constellation that included Capella (Alpha Aurigae), Menkalinan (Beta Aurigae), Pollux (Beta Geminorum), and Castor (Alpha Geminorum). Its rising signalled that the constellation Aagjuuk, made up of Altair (Alpha Aquilae), Tarazed (Gamma Aquilae), and sometimes Alshain (Beta Aquilae), would rise soon. Aagjuuk, which represented the dawn following the winter solstice, was an incredibly important constellation in the Inuit mythos. It was also used for navigation and time-keeping at night.
Features
Stars
Bright stars
Alpha Aurigae (Capella), the brightest star in Auriga, is a G8III class star (G-type giant) 43 light-years away and the sixth-brightest star in the night sky at magnitude 0.08. Its traditional name is a reference to its mythological position as Amalthea; it is sometimes called the "Goat Star". Capella's names all point to this mythology. In Arabic, Capella was called al-'Ayyuq, meaning "the goat", and in Sumerian, it was called mul.ÁŠ.KAR, "the goat star". On Ontong Java, Capella was called ngahalapolu. Capella is a spectroscopic binary with a period of 104 days; the components are both yellow giants: the primary is a G-type star and the secondary is between a G-type and an F-type star in its evolution. The secondary is formally classified as a G0III class star (G-type giant). The primary has a radius of 11.87 solar radii and a mass of 2.47 solar masses; the secondary is somewhat smaller but of comparable mass. The two components are separated by 110 million kilometers, almost 75% of the distance between the Earth and the Sun. The star's status as a binary was discovered in 1899 at the Lick Observatory; its period was determined in 1919 by J. A. Anderson at the 100-inch Mt. Wilson Observatory telescope. It appears with a golden-yellow hue, though Ptolemy and Giovanni Battista Riccioli both described its color as red, a phenomenon attributed not to a change in Capella's color but to the idiosyncrasies of their color sensitivities. Capella has an absolute magnitude of 0.3 and a luminosity of about 160 times that of the Sun. It may be loosely associated with the Hyades, an open cluster in Taurus, because of their similar proper motion. Capella has one more companion, Capella H, which is a pair of red dwarf stars located 11,000 astronomical units (0.17 light-years) from the main pair.
Beta Aurigae (Menkalinan, Menkarlina) is a bright A2IV class star (A-type subgiant). Its Arabic name comes from the phrase mankib dhu al-'inan, meaning "shoulder of the charioteer", a reference to Beta Aurigae's location in the constellation. Menkalinan is 81 light-years away and has a magnitude of 1.90. Like Epsilon Aurigae, it is an eclipsing binary star; it varies in magnitude by 0.1m. The two components are blue-white stars with a period of 3.96 days. Its double nature was revealed spectroscopically in 1890 by Antonia Maury, making it the second spectroscopic binary discovered, and its variable nature was discovered photometrically 20 years later by Joel Stebbins. Menkalinan has an absolute magnitude of 0.6. Beta Aurigae may be associated with a stream of about 70 stars including Delta Leonis and Alpha Ophiuchi; the proper motion of this group is comparable to that of the Ursa Major Moving Group, though the connection is only hypothesized. Besides its close eclipsing companion, Menkalinan has two other stars associated with it. One is an unrelated optical companion, discovered in 1783 by William Herschel; it has a magnitude of 10.5 and a separation of 184 arcseconds. The other is likely associated gravitationally with the primary, as determined by their common proper motion. This 14th-magnitude star was discovered in 1901 by Edward Emerson Barnard. It has a separation of 12.6 arcseconds, and is around 350 astronomical units from the primary.
Other bright stars
Besides particularly bright stars of Alpha and Beta Aurigae, Auriga has many dimmer naked-eye visible stars.
Gamma Aurigae, now known by its other designation Beta Tauri (El Nath, Alnath), is a B7III class star (B-type giant). At about magnitude +1.65, it would rank a clear third in apparent brightness if it were still counted in Auriga. It is a mercury-manganese star, with strong spectral signatures of heavy elements.
Iota Aurigae, also called Hasseleh and Kabdhilinan, is a K3II class star (K-type bright giant) of magnitude 2.69; it is about 494 light-years away from Earth. It evolved from a B-type star to a K-type star over the estimated 30–45 million years since its birth. It has an absolute magnitude of −2.3. It is classed as a particularly luminous bright giant, but its light is partly "extinguished" (blocked) by interstellar dust clouds; astronomers estimate that these make it appear 0.6 magnitudes fainter. It is also a hybrid star, an X-ray-producing giant that emits X-rays from its corona and has a cool stellar wind. Though its proper motion is just 0.02 arcseconds per year, it is receding from Earth. The traditional name Kabdhilinan, sometimes shortened to "Alkab", comes from the Arabic phrase al-kab dh'il inan, meaning "shoulder of the rein holder". Iota may end its life as a supernova, but because it is close to the mass limit for such stars, it may instead become a white dwarf.
Delta Aurigae, the northernmost bright star in Auriga, is a K0III-type star (K-type giant), 126 light-years from Earth and approximately 1.3 billion years old. It has a magnitude of 3.72 and an absolute magnitude of 0.2. About 12 times the radius of the Sun, Delta has only two solar masses and rotates with a period of almost one year. Though it is often listed as a single star, it actually has three very widely spaced optical companions. One is a double star of magnitude 11, two arcminutes away; the other is a star of magnitude 10, three arcminutes away.
Lambda Aurigae (Al Hurr) is a G1.5IV-V-type star (a G-type star intermediate between a subgiant and a main-sequence star) of magnitude 4.71. It has an absolute magnitude of 4.4 and is 41 light-years from Earth. It has very weak emissions in the infrared spectrum, like Epsilon Aurigae. In photometric observations of Epsilon, an unusual variable, Lambda is commonly used as a comparison star. It is reaching the end of its hydrogen-fusing lifespan at an age of 6.2 billion years. It also has an unusually high radial velocity of 83 km per second. Though older than the Sun, it is similar in many ways: it has a mass of 1.07 solar masses, a radius of 1.3 solar radii, and a rotational period of 26 days. However, it differs from the Sun in its metallicity; its iron content is 1.15 times that of the Sun, and it has relatively less nitrogen and carbon. Like Delta, it has several optical companions and is often categorized as a single star. The brightest companions are of magnitude 10, separated from Lambda by 175 and 203 arcseconds. The dimmer companions are of magnitude 13 and 14, 87 and 310 arcseconds from Lambda, respectively.
Nu Aurigae is a G9.5III (G-type giant) star of magnitude 3.97, 230 light-years from Earth. It has a luminosity of and an absolute magnitude of 0.2. Nu is a giant star with a radius of 20–21 solar radii and a mass of approximately 3 solar masses. It may technically be a binary star; its companion, sometimes listed as optical and separated by 56 arcseconds, is a dwarf star of spectral type K6 and magnitude 11.4. Its period is more than 120,000 years and it orbits at least 3,700 AU from the primary.
Eclipsing binary stars
The most prominent variable star in Auriga is Epsilon Aurigae (Al Maz, Almaaz), an F0 class eclipsing binary star with an unusually long period of 27 years; its last minima occurred in 1982–1984 and 2009–2011. The distance to the system is disputed, variously cited as 4,600 and 2,170 light-years. The primary is a white supergiant, and the secondary may itself be a binary star within a large dusty disk. The system's maximum magnitude is 3.0, but it stays at a minimum magnitude of 3.8 for around a year; its most recent eclipse began in 2009. The primary has an absolute magnitude of −8.5 and an unusually high luminosity, the reason it appears so bright at such a great distance. Epsilon Aurigae is the longest-period eclipsing binary currently known. The first observed eclipse of Epsilon Aurigae occurred in 1821, though its variable status was not confirmed until the eclipse of 1847–48. From that time forward, many theories were put forth as to the nature of the eclipsing component. Epsilon Aurigae has a noneclipsing component, visible as a 14th-magnitude companion separated from the primary by 28.6 arcseconds. It was discovered by Sherburne Wesley Burnham in 1891 at the Dearborn Observatory, and is about 0.5 light-years from the primary.
Another eclipsing binary in Auriga, part of the Haedi asterism with Eta Aurigae, is Zeta Aurigae (Sadatoni), located at a distance of 776 light-years, with a period of 2 years and 8 months (972 days). It has an absolute magnitude of −2.3. The primary is an orange-hued K5II-type star (K-type bright giant), and the secondary is a smaller blue star similar to Regulus, a B7V-type star (B-type main-sequence star). Zeta Aurigae's maximum magnitude is 3.7 and its minimum magnitude is 4.0. The full eclipse of the small blue star by the orange giant lasts 38 days, with two partial phases of 32 days at the beginning and end. The primary has a diameter of 150 D☉; the secondary has a diameter of 4 D☉. Zeta Aurigae was spectroscopically determined to be a double star by Antonia Maury in 1897 and was confirmed as a binary star in 1908 by William Wallace Campbell. Zeta Aurigae is moving away from Earth. The second of the two Haedi or "Kids" is Eta Aurigae, a B3V class star (a blue-white hued main-sequence star) located 243 light-years from Earth with a magnitude of 3.17. Eta Aurigae has an absolute magnitude of −1.7 and is also moving away from Earth.
T Aurigae (Nova Aurigae 1891) was a nova discovered at magnitude 5.0 on January 23, 1892, by Thomas David Anderson. It had become visible to the naked eye by December 10, 1891, as shown on photographic plates examined after the nova's discovery. It then brightened by a factor of 2.5 from December 11 to December 20, when it reached a maximum magnitude of 4.4. T Aurigae faded slowly in January and February 1892, then faded quickly during March and April, reaching a magnitude of 15 in late April. However, its brightness began to increase in August, reaching magnitude 9.5, where it stayed until 1895. Over the subsequent two years, its brightness decreased to magnitude 11.5, and by 1903, it was approximately 14th magnitude. By 1925, it had reached its current magnitude of 15.5. When the nova was discovered, its spectrum showed material moving at high speed towards Earth. However, when the spectrum was examined again in August 1892, it resembled that of a planetary nebula. Observations at the Lick Observatory by Edward Emerson Barnard showed it to be disc-shaped, with clear nebulosity within a diameter of 3 arcseconds. The shell had a diameter of 12 arcseconds by 1943. T Aurigae is classified as a slow nova, similar to DQ Herculis. Like DQ Herculis, WZ Sagittae, Nova Persei 1901, and Nova Aquilae 1918, it is a very close binary with a very short period. T Aurigae's period of 4.905 hours is comparable to DQ Herculis's period of 4.65 hours, and it has a partial eclipse period of 40 minutes.
Other variable stars
There are many other variable stars of different types in Auriga. ψ1 Aurigae (Dolones) is an orange-hued supergiant, which ranges between magnitudes 4.8 and 5.7, though not with a regular period. It has a spectral class of K5Iab, an average magnitude of 4.91, and an absolute magnitude of −5.7. Dolones is 3,976 light-years from Earth. RT Aurigae is a Cepheid variable which ranges between magnitudes 5.0 and 5.8 over a period of 3.7 days. A yellow-white supergiant, it lies at a distance of 1,600 light-years. It was discovered to be variable by English amateur T. H. Astbury in 1905. It has a spectral class of F8Ibv, meaning that it is a variable F-type supergiant star. RX Aurigae is a Cepheid variable as well; it varies from a minimum magnitude of 8.0 to a maximum of 7.3, and its spectral class is G0Iabv. It has a period of 11.62 days. RW Aurigae is the prototype of its class of irregular variable stars. Its variability was discovered in 1906 by Lydia Ceraski at the Moscow Observatory. RW Aurigae's spectrum indicates a turbulent stellar atmosphere, with prominent emission lines of calcium and hydrogen; its spectral type is G5V:e. SS Aurigae is an SS Cygni-type variable star, classified as a dwarf nova. Discovered by Emil Silbernagel in 1907, it is almost always at its minimum magnitude of 15, but brightens to a maximum up to 60 times brighter than the minimum an average of every 55 days, though the interval can range from 50 days to more than 100 days. It takes about 24 hours for the star to go from its minimum to its maximum magnitude. SS Aurigae is a very close binary star with a period of 4 hours and 20 minutes. Both components are small subdwarf stars; there has been dispute in the scientific community about which star originates the outbursts. UU Aurigae is a variable red giant star at a distance of 2,000 light-years. It has a period of approximately 234 days and ranges between magnitudes 5.0 and 7.0.
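The magnitude figures quoted for these variables can be related to brightness ratios through the logarithmic definition of the magnitude scale, Δm = 2.5 log10(brightness ratio). As a minimal sketch checking the SS Aurigae figures above (an illustration only, not a statement about the star's catalogued maximum):

import math

# A brightness ratio corresponds to a magnitude difference of 2.5*log10(ratio).
def mag_difference(brightness_ratio):
    return 2.5 * math.log10(brightness_ratio)

delta = mag_difference(60)                     # SS Aurigae's quoted outburst amplitude
print(f"60x brighter = {delta:.2f} mag")       # ≈ 4.45 magnitudes
print(f"implied peak ≈ mag {15 - delta:.1f}")  # from minimum magnitude 15 → ≈ 10.6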
AE Aurigae is a blue-hued main-sequence variable star. It is normally of magnitude 6.0, but its magnitude varies irregularly. AE Aurigae is associated with the 9-light-year-wide Flaming Star Nebula (IC 405), which it illuminates. However, AE Aurigae likely entered the nebula only recently, as determined through the discrepancy between the radial velocities of the star and the nebula. It has been hypothesized that AE Aurigae is a "runaway star" from the young cluster in the Orion Nebula, leaving the cluster approximately 2.7 million years ago. It is similar to 53 Arietis and Mu Columbae, other runaway stars from the Orion cluster. Its spectral class is O9.5Ve, meaning that it is an O-type main-sequence star. The Flaming Star Nebula is located near IC 410 in the celestial sphere. IC 405 obtained its name from its appearance in long-exposure astrophotographs; it has extensive filaments that make AE Aurigae appear to be on fire.
There are four Mira variable stars in Auriga: R Aurigae, UV Aurigae, U Aurigae, and X Aurigae, all of which are type M stars. More specifically, R Aurigae is of type M7III, UV Aurigae is of type C6 (a carbon star), U Aurigae is of type M9, and X Aurigae is of type K2. R Aurigae, with a period of 457.5 days, ranges in magnitude from a minimum of 13.9 to a maximum of 6.7. UV Aurigae, with a period of 394.4 days, ranges in magnitude from a minimum of 10.6 to a maximum of 7.4. U Aurigae, with a period of 408.1 days, ranges in magnitude from a minimum of 13.5 to a maximum of 7.5. X Aurigae, with a particularly short period of 163.8 days, ranges in magnitude from a minimum of 13.6 to a maximum of 8.0.
Binary and double stars
Auriga is home to several less prominent binary and double stars. Theta Aurigae (Bogardus, Mahasim) is a blue-white A0p class binary star of magnitude 2.62. It has an absolute magnitude of 0.1 and is 165 light-years from Earth. The secondary is a yellow star of magnitude 7.1, which requires a moderately sized telescope to resolve; the two stars are separated by 3.6 arcseconds. Theta Aurigae is the eastern vertex of the constellation's pentagon and is moving away from Earth. It additionally has a second optical companion, discovered by Otto Wilhelm von Struve in 1852. The separation was 52 arcseconds in 1978 and has been increasing since then because of the proper motion of Theta Aurigae, 0.1 arcseconds per year. The separation of this magnitude 9.2 component was 2.2 arcminutes (130.7 arcseconds) in 2007, at an angle of 350°. 4 Aurigae is a double star at a distance of 159 light-years. The primary is of magnitude 5.0 and the secondary is of magnitude 8.1. 14 Aurigae is a white optical binary star. The primary is of magnitude 5.0 and is at a distance of 270 light-years; the secondary is of magnitude 7.9 and is at a distance of 82 light-years. HD 30453 is a spectroscopic binary of magnitude 5.9, with a spectral type assessed as either A8m or F0m, and a period of seven days.
Stars with planetary systems
There are several stars with confirmed planetary systems in Auriga; there is also a white dwarf with a suspected planetary system. HD 40979 has one planet, HD 40979 b, discovered in 2002 through radial velocity measurements of the parent star. HD 40979 is 33.3 parsecs from Earth, a spectral class F8V star of magnitude 6.74, just past the limit of visibility to the naked eye. It is of similar size to the Sun, at 1.1 solar masses and 1.21 solar radii. The planet, with a mass of 3.83 Jupiter masses, orbits with a semi-major axis of 0.83 AU and a period of 263.1 days. HD 45350 has one planet as well. HD 45350 b was discovered through radial velocity measurements in 2004. It has a mass of 1.79 Jupiter masses and orbits every 890.76 days at a distance of 1.92 AU. Its parent star is faint, at an apparent magnitude of 7.88, a G5IV type star 49 parsecs away. It has a mass of 1.02 solar masses and a radius of 1.27 solar radii. HD 43691 b is a significantly larger planet, with a mass of 2.49 Jupiter masses; it is also far closer to its parent star, HD 43691. Discovered in 2007 from radial velocity measurements, it orbits at a distance of 0.24 AU with a period of 36.96 days. HD 43691 has a radius identical to the Sun's, though it is denser; its mass is 1.38 solar masses. It is a G0IV type star of magnitude 8.03, 93.2 parsecs from Earth.
HD 49674 is a star in Auriga with one planet orbiting it. This G3V type star is faint, at magnitude 8.1, and fairly distant, at 40.7 parsecs from Earth. Like the other stars, it is similar in size to the Sun, with a mass of 1.07 solar masses and a radius of 0.94 solar radii. Its planet, HD 49674 b, is a smaller planet, at 0.115 Jupiter masses. It orbits very close to its star, at 0.058 AU, every 4.94 days. HD 49674 b was discovered by radial velocity observations in 2002. HAT-P-9 b is the first transiting exoplanet confirmed in Auriga, orbiting the star HAT-P-9. Unlike the other exoplanets in Auriga, detected by radial velocity measurements, HAT-P-9 b was detected using the transit method in 2008. It has a mass of 0.67 Jupiter masses and orbits just 0.053 AU from its parent star, with a period of 3.92 days; its radius is 1.4 Jupiter radii, making it a hot Jupiter. Its parent star, HAT-P-9, is an F-type star approximately 480 parsecs from Earth. It has a mass of 1.28 solar masses and a radius of 1.32 solar radii.
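The quoted orbital periods and semi-major axes can be cross-checked with Kepler's third law, P² = a³/M in units of years, astronomical units, and solar masses (neglecting the planet's own mass). A minimal sketch using the figures given above, where small discrepancies reflect rounding in the quoted values:

import math

# Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[Msun]).
def period_days(a_au, star_mass_msun):
    return math.sqrt(a_au**3 / star_mass_msun) * 365.25

for name, a, m in [("HD 40979 b", 0.83, 1.1),
                   ("HD 49674 b", 0.058, 1.07),
                   ("HAT-P-9 b", 0.053, 1.28)]:
    print(f"{name}: {period_days(a, m):.2f} days")
# HD 40979 b ≈ 263 days, HD 49674 b ≈ 4.93 days, HAT-P-9 b ≈ 3.94 days,
# close to the quoted 263.1, 4.94, and 3.92 days respectively.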
The star KELT-2A (HD 42176A) is the brightest star in Auriga known to host a transiting exoplanet, KELT-2Ab, and is the fifth-brightest transit-hosting star overall. The brightness of KELT-2A allows the mass and radius of the planet to be known quite precisely: KELT-2Ab has 1.524 Jupiter masses and 1.290 Jupiter radii and is on a 4.11-day orbit, making it another hot Jupiter, similar to HAT-P-9 b. KELT-2A is a late F-dwarf and is one member of the common-proper-motion binary star system KELT-2. KELT-2B is an early K-dwarf about 295 AU away, discovered at the same time as the exoplanet.
Deep-sky objects
Auriga contains the galactic anticenter, about 3.5° to the east of Beta Aurigae. This is the point on the celestial sphere opposite the Galactic Center; it is the part of the galactic plane roughly nearest to the Solar System. Ignoring nearby bright foreground stars, the Milky Way in this direction is smaller and less luminous than toward the rest of its arms or central bar, and it is crossed by dust bands of the outer spiral arms. Auriga nevertheless has many open clusters and other objects, because rich star-forming arms of the Milky Way, including the Perseus Arm and the Orion–Cygnus Arm, run through it. The three brightest open clusters are M36, M37, and M38, all of which are visible in binoculars or a small telescope in suburban skies; a larger telescope resolves individual stars. Three other open clusters are NGC 2281, lying close to ψ7 Aurigae; NGC 1664, which is close to ε Aurigae; and IC 410 (surrounding NGC 1893), a cluster with nebulosity next to IC 405, the Flaming Star Nebula, found about midway between M38 and ι Aurigae. AE Aurigae, a runaway star, is a bright variable star currently within the Flaming Star Nebula.
M36 (NGC 1960) is a young galactic open cluster with approximately 60 stars, most of which are relatively bright; however, only about 40 stars are visible in most amateur instruments. It is at a distance of 3,900 light-years and has an overall magnitude of 6.0; it is 14 light-years wide. Its apparent diameter is 12.0 arcminutes. Of the three open clusters in Auriga, M36 is both the smallest and the most concentrated, though its brightest stars are approximately 9th magnitude. It was discovered in 1749 by Guillaume Le Gentil, the first of Auriga's major open clusters to be discovered. M36 features a 10-arcminute-wide knot of bright stars in its center, anchored by Struve 737, a double star with components separated by 10.7 arcseconds. Most of the stars in M36 are B type stars with rapid rates of rotation. M36's Trumpler class is given as both I 3 r and II 3 m. Besides the central knot, most of the cluster's other stars appear in smaller knots and groups.
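The relationship between a cluster's quoted apparent diameter and its physical diameter is the small-angle formula D = d·θ, with θ in radians. A minimal sketch checking the M36 figures above (and M37's below, where the small mismatch reflects how the cluster's edge is defined):

import math

# Physical diameter from distance and apparent (angular) diameter.
def true_diameter_ly(distance_ly, apparent_arcmin):
    theta_rad = math.radians(apparent_arcmin / 60.0)  # arcminutes -> radians
    return distance_ly * theta_rad

print(f"M36: {true_diameter_ly(3900, 12.0):.1f} ly")  # ≈ 13.6 ly, matching ~14
print(f"M37: {true_diameter_ly(4200, 23.0):.1f} ly")  # ≈ 28 ly vs the quoted ~25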
M37 (NGC 2099) is an open cluster, larger than M36 and at a distance of 4,200 light-years. It has 150 stars, making it the richest cluster in Auriga; the most prominent member is an orange star that appears at the center. M37 is approximately 25 light-years in diameter. It is the brightest open cluster in Auriga with a magnitude of 5.6; it has an apparent diameter of 23.0 arcminutes. M37 was discovered in 1764 by Charles Messier, the first of many astronomers to laud its beauty. It was described as "a virtual cloud of glittering stars" by Robert Burnham, Jr., and Charles Piazzi Smyth commented that the star field was "strewed ...with sparkling gold-dust". The stars of M37 are older than those of M36; they are approximately 200 million years old. Most of the constituent stars are A type stars, though there are at least 12 red giants in the cluster as well. M37's Trumpler class is given as both I 2 r and II 1 r. The stars visible in a telescope range in magnitude from 9.0 to 13.0; there are two 9th-magnitude stars in the center of the cluster and an east-to-west chain of 10th- and 11th-magnitude stars.
M38 is a diffuse open cluster at a distance of 3,900 light-years, the least concentrated of the three main open clusters in Auriga; it is classified as a Trumpler Class II 2 r or III 2 r cluster because of this. It appears as a cross-shaped or pi-shaped object in a telescope and contains approximately 100 stars; its overall magnitude is 6.4. M38, like M36, was discovered by Guillaume Le Gentil in 1749. It has an apparent diameter of approximately 20 arcminutes and a true diameter of about 25 light-years. Unlike M36 or M37, M38 has a varied stellar population. The majority of the population consists of A and B type main sequence stars, the B type stars being the oldest members, along with a number of G type giant stars. One yellow-hued G type star is the brightest star in M38 at a magnitude of 7.9; the next brightest stars in M38 are magnitude 9 and 10. M38 is accompanied by NGC 1907, a smaller and dimmer cluster that lies half a degree south-southwest of M38; it is at a distance of 4,200 light-years. The smaller cluster has an overall magnitude of 8.2 and a diameter of 6.0 arcminutes, making it about a third the size of M38. However, NGC 1907 is a rich cluster, classified as a Trumpler Class I 1 m n cluster. It has approximately 12 stars of magnitude 9–10, and at least 25 stars of magnitude 9–12.
IC 410, a faint nebula, is accompanied by the bright open cluster NGC 1893. The cluster is thin, with a diameter of 12 arcminutes and a population of approximately 20 stars. Its accompanying nebula has very low surface brightness, partially because of its diameter of 40 arcminutes. It appears in an amateur telescope with brighter areas in the north and south; the brighter southern patch shows a pattern of darker and lighter spots in a large instrument. NGC 1893, of magnitude 7.5, is classified as a Trumpler Class II 3 r n or II 2 m n cluster, meaning that it is not very large and is somewhat bright. The cluster possesses approximately 30 stars of magnitude 9–12. In an amateur instrument, IC 410 is only visible with an oxygen-III filter. NGC 2281 is a small open cluster at a distance of 1,500 light-years. It contains 30 stars in a crescent shape. It has an overall magnitude of 5.4 and a fairly large diameter of 14.0 arcminutes, and is classified as a Trumpler Class I 3 m cluster. The brightest star in the cluster is magnitude 8; there are approximately 12 stars of magnitude 9–10 and 20 stars of magnitude 11–13.
NGC 1931 is a nebula in Auriga, slightly more than one degree to the west of M36. It is considered to be a difficult target for an amateur telescope. NGC 1931 has an approximate integrated magnitude of 10.1; it is 3 by 3 arcminutes. However, it appears to be elongated in an amateur telescope. Some observers may note a green hue in the nebula; a large telescope will easily show the nebula's "peanut" shape, as well as the quartet of stars that are engulfed by the nebula. The open cluster portion of NGC 1931 is classed as a I 3 p n cluster; the nebula portion is classed as both an emission and reflection nebula. NGC 1931 is approximately 6,000 light-years from Earth and could easily be confused with a comet in the eyepiece of a telescope.
NGC 1664 is a fairly large open cluster, with a diameter of 18 arcminutes, and moderately bright, with a magnitude of 7.6, comparable to several other open clusters in Auriga. One open cluster with a similar magnitude is NGC 1778, with a magnitude of 7.7. This small cluster has a diameter of 7 arcminutes and contains 25 stars. NGC 1857, a small cluster, is slightly brighter at magnitude 7.0. It has a diameter of 6 arcminutes and contains 40 stars, making it far more concentrated than the similar-sized NGC 1778. Far dimmer than the other open clusters is NGC 2126 at magnitude 10.2. Despite its dimness, NGC 2126 is as concentrated as NGC 1857, having 40 stars in a diameter of 6 arcminutes.
Meteor showers
Auriga is home to two meteor showers. The Aurigids, named for the entire constellation and formerly called the "Alpha Aurigids", are renowned for their intermittent outbursts, such as those in 1935, 1986, 1994, and 2007. They are associated with the comet Kiess (C/1911 N1), discovered in 1911 by Carl Clarence Kiess. The association was discovered after the outburst in 1935 by Cuno Hoffmeister and Arthur Teichgraeber. The Aurigid outburst on September 1, 1935, prompted the investigation of a connection with Comet Kiess, though the 24-year delay between the comet's appearance and the outburst caused doubt in the scientific community. However, the outburst in 1986 erased much of this doubt: Istvan Teplickzky, a Hungarian amateur meteor observer, observed many bright meteors radiating from Auriga in a fashion very similar to the confirmed 1935 outburst. Because the positions of Teplickzky's observed radiant and the 1935 radiant were close to the position of Comet Kiess, the comet was confirmed as the source of the Aurigid meteor stream.
The Aurigids had a spectacular outburst in 1994, when many grazing meteors—those that have a shallow angle of entry and seem to rise from the horizon—were observed in California. The meteors were tinted blue and green, moved slowly, and left trails at least 45° long. Because they had such a shallow angle of entry, some 1994 Aurigids lasted up to 2 seconds. Though there were only a few visual observers for part of the outburst, the 1994 Aurigids peak, which lasted less than two hours, was later confirmed by Finnish amateur radio astronomer Ilkka Yrjölä. The connection with Comet Kiess was finally confirmed in 1994. The 2007 outburst of the Aurigids was predicted by Peter Jenniskens and was observed by astronomers worldwide. Despite some predictions that there would be no Alpha Aurigid outburst, many bright meteors were observed throughout the shower, which peaked on September 1 as predicted. Much like in the 1994 outburst, the 2007 Aurigids were very bright and often colored blue and green. The maximum zenithal hourly rate was 100 meteors per hour, observed at 4:15 am, California time (12:15 UTC) by a team of astronomers flying on NASA planes.
The Aurigids are normally a placid Class II meteor shower, beginning on August 28 every year and peaking in the early morning hours of September 1. Though the maximum zenithal hourly rate is 2–5 meteors per hour, the Aurigids are fast, with an entry velocity of about 66 km/sec. The annual Aurigids have a radiant located about two degrees north of Theta Aurigae, a third-magnitude star in the center of the constellation. The Aurigids end on September 4. In some years, the maximum rate has reached 9–30 meteors per hour.
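The zenithal hourly rate figures quoted here are idealized: the rate a real observer sees falls with the radiant's altitude and with sky brightness. Below is a minimal sketch of the standard correction used in visual meteor observing; this is general practice rather than anything stated in this article, and the population index r is an assumed value.

```python
import math

def observed_rate(zhr: float, radiant_alt_deg: float,
                  limiting_mag: float, r: float = 2.5) -> float:
    """Estimate the hourly meteor rate a single observer actually sees.

    zhr             -- zenithal hourly rate under ideal conditions
    radiant_alt_deg -- altitude of the radiant above the horizon (degrees)
    limiting_mag    -- faintest star the observer can see (6.5 = ideal dark sky)
    r               -- population index of the shower (assumed value here)
    """
    # Rate scales with sin(altitude): a radiant on the horizon yields ~0.
    altitude_factor = max(math.sin(math.radians(radiant_alt_deg)), 0.0)
    # Brighter skies hide fainter meteors, by a factor of r per magnitude lost.
    sky_penalty = r ** (6.5 - limiting_mag)
    return zhr * altitude_factor / sky_penalty

# A ZHR-100 outburst with the radiant 30 degrees up under magnitude-5.5 skies:
print(round(observed_rate(100, 30, 5.5), 1))  # -> 20.0 meteors per hour
```

The example shows why even a ZHR-100 outburst can look modest to a single observer under imperfect skies.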
The other meteor showers radiating from Auriga are far less prominent than the capricious Alpha Aurigids. The Zeta Aurigids are a weak shower with a northern and a southern branch, lasting from December 11 to January 21. The shower peaks on January 1 and has very slow meteors, with a maximum rate of 1–5 meteors per hour. It was discovered by William Denning in 1886; Alexander Stewart Herschel later identified it as a source of rare fireballs. There is another faint stream of meteors called the "Aurigids", unrelated to the September shower. This shower lasts from January 31 to February 23, peaking from February 5 through February 10; its slow meteors peak at a rate of approximately 2 per hour. The Delta Aurigids are another faint shower radiating from Auriga. They were discovered by a group of researchers at New Mexico State University and have a very low peak rate. The Delta Aurigids last from September 22 through October 23, peaking between October 6 and October 15. They may be related to the September Epsilon Perseids, though they are more similar to the Coma Berenicids in that the Delta Aurigids last longer and have a dearth of bright meteors. They too have a hypothesized connection to an unknown short-period retrograde comet. The Iota Aurigids are a hypothesized shower occurring in mid-November; their parent body may be the asteroid 2000 NL10, but this connection is highly disputed. The hypothesized Iota Aurigids may instead be a faint stream of Taurids.
| Physical sciences | Other | Astronomy |
210242 | https://en.wikipedia.org/wiki/Joint | Joint | A joint or articulation (or articular surface) is the connection made between bones, ossicles, or other hard structures in the body which link an animal's skeletal system into a functional whole. They are constructed to allow for different degrees and types of movement. Some joints, such as the knee, elbow, and shoulder, are self-lubricating, almost frictionless, and are able to withstand compression and maintain heavy loads while still executing smooth and precise movements. Other joints such as sutures between the bones of the skull permit very little movement (only during birth) in order to protect the brain and the sense organs. The connection between a tooth and the jawbone is also called a joint, and is described as a fibrous joint known as a gomphosis. Joints are classified both structurally and functionally.
Joints play a vital role in the human body, contributing to movement, stability, and overall function. They are essential for mobility and flexibility, connecting bones and facilitating a wide range of motions, from simple bending and stretching to complex actions like running and jumping. Beyond enabling movement, joints provide structural support and stability to the skeleton, helping to maintain posture, balance, and the ability to bear weight during daily activities.
The clinical significance of joints is highlighted by common disorders that affect their health and function. Osteoarthritis, a degenerative joint disease, involves the breakdown of cartilage, leading to pain, stiffness, and reduced mobility. Rheumatoid arthritis, an autoimmune disorder, causes chronic inflammation in the joints, often resulting in swelling, pain, and potential deformity. Another prevalent condition, gout, arises from the accumulation of uric acid crystals in the joints, triggering severe pain and inflammation.
Joints also hold diagnostic importance, as their condition can indicate underlying health issues. Symptoms such as joint pain and swelling may signal inflammatory diseases, infections, or metabolic disorders. Effective treatment and management of joint-related conditions often require a multifaceted approach, including physical therapy, medications, lifestyle changes, and, in severe cases, surgical interventions. Preventive care, such as regular exercise, a balanced diet, and avoiding excessive strain, is critical for maintaining joint health, preventing disorders, and improving overall quality of life.
Classification
The number of joints in the human body depends on whether sesamoid bones are included, the age of the person, and the definition of a joint used. However, the number of sesamoids is the same in most people, with variations being rare.
Joints are mainly classified structurally and functionally. Structural classification is determined by how the bones connect to each other, while functional classification is determined by the degree of movement between the articulating bones. In practice, there is significant overlap between the two types of classifications.
Clinical, numerical classification
monoarticular – concerning one joint
oligoarticular or pauciarticular – concerning 2–4 joints
polyarticular – concerning 5 or more joints
Structural classification (binding tissue)
Structural classification names and divides joints according to the type of binding tissue that connects the bones to each other. There are four structural classifications of joints:
fibrous joint – joined by dense regular connective tissue that is rich in collagen fibers
cartilaginous joint – joined by cartilage. There are two types: primary cartilaginous joints composed of hyaline cartilage, and secondary cartilaginous joints composed of hyaline cartilage covering the articular surfaces of the involved bones with fibrocartilage connecting them.
synovial joint – not directly joined – the bones have a synovial cavity and are united by the dense irregular connective tissue that forms the articular capsule, which is normally associated with accessory ligaments.
facet joint – a joint between the articular processes of two vertebrae.
Functional classification (movement)
Joints can also be classified functionally according to the type and degree of movement they allow: Joint movements are described with reference to the basic anatomical planes.
synarthrosis – permits little or no mobility. Most synarthrosis joints are fibrous joints, such as skull sutures. This lack of mobility is important, because the skull bones serve to protect the brain.
amphiarthrosis – permits slight mobility. Most amphiarthrosis joints are cartilaginous joints. An example is the intervertebral disc. Individual intervertebral discs allow for small movements between adjacent vertebrae, but when added together, the vertebral column provides the flexibility that allows the body to twist, or bend to the front, back, or side.
synovial joint (also known as a diarthrosis) – freely movable. Synovial joints can in turn be classified into six groups according to the type of movement they allow: plane joint, ball and socket joint, hinge joint, pivot joint, condyloid joint and saddle joint.
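The "most joints" correspondences noted in the three items above can be summarized in a small lookup table. The following is a minimal sketch; the dictionary is a simplification for illustration, and real joints can deviate from these typical pairings.

```python
# Typical (not exhaustive) correspondence between the structural and
# functional joint classifications described above; examples are from
# the surrounding text.
STRUCTURAL_TO_FUNCTIONAL = {
    "fibrous": "synarthrosis",          # e.g. skull sutures: little or no mobility
    "cartilaginous": "amphiarthrosis",  # e.g. intervertebral discs: slight mobility
    "synovial": "diarthrosis",          # e.g. knee, shoulder: freely movable
}

# The six movement-based subtypes of synovial joints named above.
SYNOVIAL_SUBTYPES = [
    "plane", "ball and socket", "hinge", "pivot", "condyloid", "saddle",
]

def typical_mobility(structure: str) -> str:
    """Return the functional class most joints of this structure fall into."""
    return STRUCTURAL_TO_FUNCTIONAL[structure]

print(typical_mobility("cartilaginous"))  # -> amphiarthrosis
```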
Joints can also be classified, according to the number of axes of movement they allow, into nonaxial (gliding, as between the proximal ends of the ulna and radius), monoaxial (uniaxial), biaxial and multiaxial. Another classification is according to the degrees of freedom allowed, distinguishing between joints with one, two or three degrees of freedom. A further classification is according to the number and shapes of the articular surfaces: flat, concave and convex surfaces. Types of articular surfaces include trochlear surfaces.
Biomechanical classification
Joints can also be classified based on their anatomy or on their biomechanical properties. According to the anatomic classification, joints are subdivided into simple and compound, depending on the number of bones involved, and into complex and combination joints:
Simple joint: two articulation surfaces (e.g. shoulder joint, hip joint)
Compound joint: three or more articulation surfaces (e.g. radiocarpal joint)
Complex joint: two or more articulation surfaces and an articular disc or meniscus (e.g. knee joint)
Anatomical
The joints may be classified anatomically into the following groups:
Joints of hand
Elbow joints
Wrist joints
Axillary joints
Sternoclavicular joints
Vertebral articulations
Temporomandibular joints
Sacroiliac joints
Hip joints
Knee joints
Articulations of foot
Unmyelinated nerve fibers are abundant in joint capsules and ligaments, as well as in the outer part of intra-articular menisci. These nerve fibers are responsible for pain perception when a joint is strained.
Clinical significance
Damaging the cartilage of joints (articular cartilage) or the bones and muscles that stabilize the joints can lead to joint dislocations and osteoarthritis. Swimming is a great way to exercise the joints with minimal damage.
A joint disorder is termed arthropathy, and when involving inflammation of one or more joints the disorder is called arthritis. Most joint disorders involve arthritis, but joint damage by external physical trauma is typically not termed arthritis.
Arthropathies are called polyarticular (multiarticular) when involving many joints and monoarticular when involving only a single joint.
Arthritis is the leading cause of disability in people over the age of 55. There are many different forms of arthritis, each of which has a different cause. The most common form of arthritis, osteoarthritis (also known as degenerative joint disease), occurs following trauma to the joint, following an infection of the joint or simply as a result of aging and the deterioration of articular cartilage. Furthermore, there is emerging evidence that abnormal anatomy may contribute to early development of osteoarthritis. Other forms of arthritis are rheumatoid arthritis and psoriatic arthritis, which are autoimmune diseases in which the body is attacking itself. Septic arthritis is caused by joint infection. Gouty arthritis is caused by deposition of uric acid crystals in the joint that results in subsequent inflammation. Additionally, there is a less common form of gout that is caused by the formation of rhomboidal-shaped crystals of calcium pyrophosphate. This form of gout is known as pseudogout.
Temporomandibular joint syndrome (TMJ) involves the jaw joints and can cause facial pain, clicking sounds in the jaw, or limitation of jaw movement, to name a few symptoms. It is caused by psychological tension and misalignment of the jaw (malocclusion), and may affect as many as 75 million Americans.
History
Etymology
The English word joint is a past participle of the verb join, and can be read as joined. Joint is derived from Latin iunctus, past participle of the Latin verb iungere, join, unite, connect, attach.
The English term articulation is derived from Latin articulatio.
Humans have also developed lighter, more fragile joint bones over time, owing to a decrease in physical activity compared with thousands of years ago.
| Biology and health sciences | Skeletal system | null |
210269 | https://en.wikipedia.org/wiki/Intervertebral%20disc | Intervertebral disc | An intervertebral disc (British English), also spelled intervertebral disk (American English), lies between adjacent vertebrae in the vertebral column. Each disc forms a fibrocartilaginous joint (a symphysis), to allow slight movement of the vertebrae, to act as a ligament to hold the vertebrae together, and to function as a shock absorber for the spine.
Structure
Intervertebral discs consist of an outer fibrous ring, the anulus (or annulus) fibrosus disci intervertebralis, which surrounds an inner gel-like center, the nucleus pulposus.
The anulus fibrosus consists of several layers (laminae) of fibrocartilage made up of both type I and type II collagen. Type I is concentrated toward the edge of the ring, where it provides greater strength. The stiff laminae can withstand compressive forces.
The fibrous intervertebral disc contains the nucleus pulposus and this helps to distribute pressure evenly across the disc. This prevents the development of stress concentrations which could cause damage to the underlying vertebrae or to their endplates. The nucleus pulposus contains loose fibers suspended in a mucoprotein gel. The nucleus of the disc acts as a shock absorber, absorbing the impact of the body's activities and keeping the two vertebrae separated. It is the remnant of the notochord.
There is one disc between each pair of vertebrae, except for the first cervical segment, the atlas. The atlas is a ring around the roughly cone-shaped extension of the axis (second cervical segment). The axis acts as a post around which the atlas can rotate, allowing the neck to swivel. There are 23 discs in the human spine: 6 in the neck (cervical) region, 12 in the middle back (thoracic) region, and 5 in the lower back (lumbar) region.
Discs are named by the vertebral body above and below. For example, the disc between the fifth and sixth cervical vertebrae is designated "C5-6".
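Since the convention is purely positional, disc labels can be generated mechanically. Below is a minimal sketch; the function is illustrative, not a clinical standard, and note that it does not handle the cross-region lumbosacral disc, conventionally written "L5-S1".

```python
def disc_label(region: str, upper: int) -> str:
    """Name the disc below a given vertebra by the bodies above and below,
    e.g. disc_label('C', 5) -> 'C5-6' for the disc between C5 and C6.

    Limitation: the disc between L5 and the sacrum is conventionally
    'L5-S1', which this simple within-region sketch does not produce.
    """
    return f"{region}{upper}-{upper + 1}"

print(disc_label("C", 5))  # -> C5-6
print(disc_label("L", 4))  # -> L4-5
```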
Development
During development and at birth, vertebral discs have some vascular supply to the cartilage endplates and the anulus fibrosus. This vascular supply quickly deteriorates, leaving almost no direct blood supply in healthy adults.
Intervertebral disc space
The intervertebral disc space is typically defined on an X-ray photograph as the space between adjacent vertebrae. In healthy patients, this corresponds to the size of the intervertebral disc. The size of the space can be altered in pathological conditions such as discitis (infection of the intervertebral disc).
Function
The intervertebral disc functions to separate the vertebrae from each other and provides the surface for the shock-absorbing gel of the nucleus pulposus. The nucleus pulposus of the disc functions to distribute hydraulic pressure in all directions within each intervertebral disc under compressive loads. The nucleus pulposus consists of large vacuolated notochord cells, small chondrocyte-like cells, collagen fibrils, and aggrecan, a proteoglycan that aggregates by binding to hyaluronan. Attached to each aggrecan molecule are glycosaminoglycan (GAG) chains of chondroitin sulfate and keratan sulfate. Increasing the amount of negatively charged aggrecan increases oncotic pressure, resulting in a shift of extracellular fluid from the outside to the inside of the nucleus pulposus. The amount of glycosaminoglycans (and hence water) decreases with age and degeneration.
Clinical significance
Anything arising from the intervertebral disc may be termed discogenic; in particular, pain associated with the disc is referred to as discogenic pain.
Herniation
A spinal disc herniation, commonly referred to as a slipped disc, can happen when unbalanced mechanical pressures substantially deform the anulus fibrosus, allowing part of the nucleus to protrude. These events can occur during peak physical performance, during traumas, or as a result of chronic deterioration (typically accompanied by poor posture), and have been associated with a Propionibacterium acnes infection. Both the deformed anulus and the gel-like material of the nucleus pulposus can be forced laterally or posteriorly, distorting local muscle function and putting pressure on the nearby nerve. This can give symptoms typical of nerve root entrapment, which can include paresthesia, numbness, chronic and/or acute pain (either locally or along the dermatome served by the entrapped nerve), loss of muscle tone, and decreased homeostatic performance. The disc is not physically slipped; it bulges, usually in just one direction.
Another kind of herniation, of the nucleus pulposus, can happen as a result of the formation of Schmorl's nodes on the intervertebral disc. This is referred to as vertical disc herniation.
Degeneration
Before age 40, approximately 25% of people show evidence of disc degeneration at one or more levels. Beyond age 40, more than 60% of people show evidence of disc degeneration at one or more levels on magnetic resonance imaging (MRI). These degenerative changes are a normal part of the ageing process and do not correlate to pain.
One effect of aging and disc degeneration is that the nucleus pulposus begins to dehydrate and the concentration of proteoglycans in the matrix decreases, thus limiting the ability of the disc to absorb shock. This general shrinking of disc size is partially responsible for the common decrease in height as humans age. The anulus fibrosus also becomes weaker with age and has an increased risk of tearing. In addition, the cartilage endplates begin thinning, fissures begin to form, and there is sclerosis of the subchondral bone. Since the fissures are formed in the anulus fibrosus due to osteoarthritic bones or degeneration in general, the inner nucleus pulposus can seep out and put pressure on any number of vertebral nerves. A herniated disc can cause mild to severe pain such as sciatica, and treatment for herniated discs ranges from physical therapy to surgery (see also intervertebral disc arthroplasty). Other degeneration of the vertebral column includes diffuse idiopathic skeletal hyperostosis (DISH), which is the calcification or ossification of the ligaments surrounding the vertebrae. This degeneration causes stiffness and sometimes even curvature in the lumbar and thoraco-lumbar spinal region.
Burgeoning evidence suggests that long-term running may mitigate age-related degeneration within lumbar intervertebral discs.
Scoliosis
While scoliosis may not cause pain in some people, in others it may cause chronic pain. Other spinal disorders can affect the morphology of intervertebral discs. For example, patients with scoliosis commonly have calcium deposits (ectopic calcification) in the cartilage endplate and sometimes in the disc itself.
Herniated discs are also found to have a higher degree of cellular senescence than non-herniated discs. In addition to scoliosis, which is the lateral 'S' curvature of the spine, the vertebral column can also exhibit other abnormalities such as kyphosis (hunchback), which appears in old age, or lordosis (swayback), which is often present in pregnancy and obesity.
Etymology
The Latin word anulus means "little ring"; it is the diminutive of anus ("ring"). However, modern English also spells the word more phonetically annulus, as with the term annular eclipse, where the moon blocks the sun except for a bright ring around it.
| Biology and health sciences | Skeletal system | Biology |
210294 | https://en.wikipedia.org/wiki/Ulna | Ulna | The ulna or ulnar bone (plural: ulnae or ulnas) is a long bone in the forearm stretching from the elbow to the wrist. It is on the same side of the forearm as the little finger, running parallel to the radius, the forearm's other long bone. Longer and thinner than the radius, the ulna is considered to be the smaller long bone of the lower arm. The corresponding bone in the lower leg is the fibula.
Structure
The ulna is a long bone found in the forearm that stretches from the elbow to the wrist, and when in standard anatomical position, is found on the medial side of the forearm. It is broader close to the elbow, and narrows as it approaches the wrist.
Close to the elbow, the ulna has a bony process, the olecranon process, a hook-like structure that fits into the olecranon fossa of the humerus. This prevents hyperextension and forms a hinge joint with the trochlea of the humerus. There is also a radial notch for the head of the radius, and the ulnar tuberosity to which muscles attach.
Close to the wrist, the ulna has a styloid process.
Near the elbow
Near the elbow, the ulna has two curved processes, the olecranon and the coronoid process; and two concave, articular cavities, the semilunar and radial notches.
The olecranon is a large, thick, curved eminence, situated at the upper and back part of the ulna. It is bent forward at the summit so as to present a prominent lip which is received into the olecranon fossa of the humerus in extension of the forearm. Its base is contracted where it joins the body and the narrowest part of the upper end of the ulna. Its posterior surface, directed backward, is triangular, smooth, subcutaneous, and covered by a bursa. Its superior surface is of quadrilateral form, marked behind by a rough impression for the insertion of the triceps brachii; and in front, near the margin, by a slight transverse groove for the attachment of part of the posterior ligament of the elbow joint. Its anterior surface is smooth, concave, and forms the upper part of the semilunar notch. Its borders present continuations of the groove on the margin of the superior surface; they serve for the attachment of ligaments: the back part of the ulnar collateral ligament medially, and the posterior ligament laterally. From the medial border a part of the flexor carpi ulnaris arises; while to the lateral border the anconeus is attached.
The coronoid process is a triangular eminence projecting forward from the upper and front part of the ulna. Its base is continuous with the body of the bone, and of considerable strength. Its apex is pointed, slightly curved upward, and in flexion of the forearm is received into the coronoid fossa of the humerus. Its upper surface is smooth, concave, and forms the lower part of the semilunar notch. Its antero-inferior surface is concave, and marked by a rough impression for the insertion of the brachialis. At the junction of this surface with the front of the body is a rough eminence, the tuberosity of the ulna, which gives insertion to a part of the brachialis; to the lateral border of this tuberosity the oblique cord is attached. Its lateral surface presents a narrow, oblong, articular depression, the radial notch. Its medial surface, by its prominent, free margin, serves for the attachment of part of the ulnar collateral ligament. At the front part of this surface is a small rounded eminence for the origin of one head of the flexor digitorum superficialis; behind the eminence is a depression for part of the origin of the flexor digitorum profundus; descending from the eminence is a ridge which gives origin to one head of the pronator teres. Frequently, the flexor pollicis longus arises from the lower part of the coronoid process by a rounded bundle of muscular fibers.
The semilunar notch is a large depression, formed by the olecranon and the coronoid process, and serving as articulation with the trochlea of the humerus. About the middle of either side of this notch is an indentation, which contracts it somewhat, and indicates the junction of the olecranon and the coronoid process. The notch is concave from above downward, and divided into a medial and a lateral portion by a smooth ridge running from the summit of the olecranon to the tip of the coronoid process. The medial portion is the larger, and is slightly concave transversely; the lateral is convex above, slightly concave below.
The radial notch is a narrow, oblong, articular depression on the lateral side of the coronoid process; it receives the circumferential articular surface of the head of the radius. It is concave from before backward, and its prominent extremities serve for the attachment of the annular ligament.
Body
The body of the ulna at its upper part is prismatic in form, and curved so as to be convex behind and lateralward; its central part is straight; its lower part is rounded, smooth, and bent a little lateralward. It tapers gradually from above downward, and has three borders and three surfaces.
Borders
The volar border (margo volaris; anterior border) begins above at the prominent medial angle of the coronoid process, and ends below in front of the styloid process. Its upper part, well-defined, and its middle portion, smooth and rounded, give origin to the flexor digitorum profundus; its lower fourth serves for the origin of the pronator quadratus. This border separates the volar from the medial surface.
The dorsal border (margo dorsalis; posterior border) begins above at the apex of the triangular subcutaneous surface at the back part of the olecranon, and ends below at the back of the styloid process; it is well-marked in the upper three-fourths, and gives attachment to an aponeurosis which affords a common origin to the flexor carpi ulnaris, the extensor carpi ulnaris, and the flexor digitorum profundus; its lower fourth is smooth and rounded. This border separates the medial from the dorsal surface.
The interosseous crest (crista interossea; external or interosseous border) begins above by the union of two lines, which converge from the extremities of the radial notch and enclose between them a triangular space for the origin of part of the supinator; it ends below at the head of the ulna. Its upper part is sharp, its lower fourth smooth and rounded. This crest gives attachment to the interosseous membrane, and separates the volar from the dorsal surface.
Surfaces
The volar surface (facies volaris; anterior surface), much broader above than below, is concave in its upper three-fourths, and gives origin to the flexor digitorum profundus; its lower fourth, also concave, is covered by the pronator quadratus. The lower fourth is separated from the remaining portion by a ridge, directed obliquely downward and medialward, which marks the extent of origin of the pronator quadratus. At the junction of the upper with the middle third of the bone is the nutrient canal, directed obliquely upward.
The dorsal surface (facies dorsalis; posterior surface) directed backward and lateralward, is broad and concave above; convex and somewhat narrower in the middle; narrow, smooth, and rounded below. On its upper part is an oblique ridge, which runs from the dorsal end of the radial notch, downward to the dorsal border; the triangular surface above this ridge receives the insertion of the anconeus, while the upper part of the ridge affords attachment to the supinator. Below this the surface is subdivided by a longitudinal ridge, sometimes called the perpendicular line, into two parts: the medial part is smooth, and covered by the extensor carpi ulnaris; the lateral portion, wider and rougher, gives origin from above downward to the supinator, the abductor pollicis longus, the extensor pollicis longus, and the extensor indicis proprius.
The medial surface (facies medialis; internal surface) is broad and concave above, narrow and convex below. Its upper three-fourths give origin to the flexor digitorum profundus; its lower fourth is subcutaneous.
Near the wrist
Near the wrist, the ulna ends in two eminences: the lateral and larger is a rounded, articular eminence, termed the head of the ulna; the medial, narrower and more projecting, is a non-articular eminence, the ulnar styloid process.
The head of the ulna presents an articular surface, part of which, of an oval or semilunar form, is directed downward, and articulates with the upper surface of the triangular articular disk which separates it from the wrist-joint; the remaining portion, directed lateralward, is narrow, convex, and received into the ulnar notch of the radius.
The styloid process projects from the medial and back part of the bone; it descends a little lower than the head, and its rounded end affords attachment to the ulnar collateral ligament of the wrist-joint.
The head is separated from the styloid process by a depression for the attachment of the apex of the triangular articular disk, and behind, by a shallow groove for the tendon of the extensor carpi ulnaris.
Microanatomy
The ulna is a long bone. The long, narrow medullary cavity of the ulna is enclosed in a strong wall of cortical tissue which is thickest along the interosseous border and dorsal surface. At the extremities the compact layer thins. The compact layer is continued onto the back of the olecranon as a plate of close spongy bone with lamellae parallel. From the inner surface of this plate and the compact layer below it trabeculae arch forward toward the olecranon and coronoid and cross other trabeculae, passing backward over the medullary cavity from the upper part of the shaft below the coronoid. Below the coronoid process there is a small area of compact bone from which trabeculae curve upward to end obliquely to the surface of the semilunar notch which is coated with a thin layer of compact bone. The trabeculae at the lower end have a more longitudinal direction.
Development
The ulna is ossified from three centers: one each for the body, the wrist end, and the elbow end, near the top of the olecranon.
Ossification begins near the middle of the body of the ulna, about the eighth week of fetal life, and soon extends through the greater part of the bone.
At birth, the ends are cartilaginous. About the fourth year or so, a center appears in the middle of the head, and soon extends into the ulnar styloid process. About the tenth year, a center appears in the olecranon near its extremity, the chief part of this process being formed by an upward extension of the body. The upper epiphysis joins the body about the sixteenth, the lower about the twentieth year.
Function
Joints
The ulna forms part of the wrist and elbow joints. Specifically, the ulna joins (articulates) with:
the trochlea of the humerus, at the elbow, as a hinge joint with the semilunar (trochlear) notch of the ulna.
the radius, near the elbow, as a pivot joint; this allows the radius to cross over the ulna in pronation.
the distal radius, where it fits into the ulnar notch.
the radius along its length via the interosseous membrane that forms a syndesmosis joint
Clinical significance
Fractures
Specific types of ulna fracture include:
Monteggia fracture - a fracture of the proximal third of the ulna with the dislocation of the head of the radius
Hume fracture - a fracture of the olecranon with an associated anterior dislocation of the radial head
Conservative management is possible for ulnar fractures when they are located in the distal two-thirds, only involve the shaft, with no shortening, less than 10° angulation and less than 50% displacement. In such cases, a cast should be applied that goes above the elbow.
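The eligibility criteria listed above combine with a simple logical AND, which can be made explicit in code. Below is a minimal sketch; field and function names are illustrative, and this is not clinical software.

```python
from dataclasses import dataclass

@dataclass
class UlnaFracture:
    """Illustrative record of the findings named in the text above."""
    in_distal_two_thirds: bool
    shaft_only: bool
    shortening: bool
    angulation_deg: float
    displacement_pct: float

def conservative_management_possible(f: UlnaFracture) -> bool:
    """All criteria from the text must hold simultaneously."""
    return (f.in_distal_two_thirds
            and f.shaft_only
            and not f.shortening
            and f.angulation_deg < 10
            and f.displacement_pct < 50)

# Example: a minimally angulated, minimally displaced distal shaft fracture.
print(conservative_management_possible(
    UlnaFracture(True, True, False, 5, 20)))  # -> True: above-elbow cast
```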
Other animals
In four-legged animals, the radius is the main load-bearing bone of the lower forelimb, and the ulna is important primarily for muscular attachment. In many mammals, the ulna is partially or wholly fused with the radius, and may therefore not exist as a separate bone. However, even in extreme cases of fusion, such as in horses, the olecranon process is still present, albeit as a projection from the upper radius.
In birds and other dinosaurs, the ulna forms a surface of attachment for the secondary feathers. These often leave osteological evidence in the form of quill knobs, allowing for identification of feathers in fossils that otherwise lack integumentary information.
| Biology and health sciences | Skeletal system | Biology |
210325 | https://en.wikipedia.org/wiki/Swallow | Swallow | The swallows, martins, and saw-wings, or Hirundinidae are a family of passerine songbirds found around the world on all continents, including occasionally in Antarctica. Highly adapted to aerial feeding, they have a distinctive appearance. The term "swallow" is used as the common name for Hirundo rustica in the UK and Ireland. Around 90 species of Hirundinidae are known, divided into 21 genera, with the greatest diversity found in Africa, which is also thought to be where they evolved as hole-nesters. They also occur on a number of oceanic islands. A number of European and North American species are long-distance migrants; by contrast, the West and South African swallows are nonmigratory.
This family comprises two subfamilies: Pseudochelidoninae (the river martins of the genus Pseudochelidon) and Hirundininae (all other swallows, martins, and saw-wings). In the Old World, the name "martin" tends to be used for the squarer-tailed species, and the name "swallow" for the more fork-tailed species; however, this distinction does not represent a real evolutionary separation. In the New World, "martin" is reserved for members of the genus Progne. (These two systems are responsible for the same species being called sand martin in the Old World and bank swallow in the New World.)
Taxonomy and systematics
The family Hirundinidae was introduced (as Hirundia) by the French polymath Constantine Samuel Rafinesque in 1815. The Hirundinidae are morphologically unique within the passerines, with molecular evidence placing them as a distinctive lineage within the Sylvioidea (Old World warblers and relatives). Phylogenetic analysis has shown that the family Hirundinidae is sister to the cupwings in the family Pnoepygidae. The two families diverged in the early Miocene around 22 million years ago.
Within the family, a clear division exists between the two subfamilies, the Pseudochelidoninae, which are composed of the two species of river martins, and the Hirundininae, into which the remaining species are placed. The division of the Hirundininae has been the source of much discussion, with various taxonomists variously splitting them into as many as 24 genera and lumping them into just 12. Some agreement exists that three core groups occur within the Hirundininae: the saw-wings of the genus Psalidoprocne, the core martins, and the swallows of the genus Hirundo and their allies. The saw-wings are the most basal of the three, with the other two clades being sister to each other. The phylogeny of the swallows is closely related to evolution of nest construction; the more basal saw-wings use burrows as nest, the core martins have both burrowing (in the Old World members) and cavity adoption (in New World members) as strategies, and the genus Hirundo and its allies use mud nests.
The genus-level arrangement of the family follows a molecular phylogenetic study by Drew Schield and collaborators that was published in 2024. The choice of genera and the number of species is taken from the list of birds maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Fossil record
The oldest known fossil swallow is Miochelidon eschata from the Early Miocene of Siberia; it is the only record of Hirundinidae from the Miocene. It is likely a basal member of the family.
Description
The Hirundinidae have an evolutionarily conservative body shape, which is similar across the clade, but is unlike that of other passerines. Swallows have adapted to hunting insects on the wing by developing a slender, streamlined body and long, pointed wings, which allow great maneuverability and endurance, as well as frequent periods of gliding. Their body shapes allow for very efficient flight; the metabolic rate of swallows in flight is 49–72% lower than equivalent passerines of the same size.
Swallows have two foveae in each eye, giving them sharp lateral and frontal vision to help track prey. They also have relatively long eyes, with their length almost equaling their width. The long eyes allow for an increase in visual acuity without competing with the brain for space inside of the head. The morphology of the eye in swallows is similar to that of a raptor.
Like the unrelated swifts and nightjars, which hunt in a similar way, they have short bills, but strong jaws and a wide gape. Their body lengths range from about 10 to 24 cm and their weights from about 10 to 60 g. The smallest species by weight may be the Fanti sawwing, while the purple martin and southern martin rival one another as the heaviest swallows. The wings are long, pointed, and have nine primary feathers. The tail has 12 feathers and may be deeply forked, somewhat indented, or square-ended. A long tail increases maneuverability, and may also function as a sexual adornment, since the tail is frequently longer in males. In barn swallows, the tail of the male is 18% longer than that of the female, and females select mates on the basis of tail length.
Their legs are short, and their feet are adapted for perching rather than walking, as the front toes are partially joined at the base. Swallows are capable of walking and even running, but they do so with a shuffling, waddling gait. The leg muscles of the river martins (Pseudochelidon) are stronger and more robust than those of other swallows. The river martins have other characteristics that separate them from the other swallows. The structure of the syrinx is substantially different between the two subfamilies; and in most swallows, the bill, legs, and feet are dark brown or black, but in the river martins, the bill is orange-red and the legs and feet are pink.
The most common hirundine plumage is glossy dark blue or green above and plain or streaked underparts, often white or rufous. Species that burrow or live in dry or mountainous areas are often matte brown above (e.g. sand martin and crag martin). The sexes show limited or no sexual dimorphism, with longer outer tail feathers in the adult male probably being the most common distinction.
The chicks hatch naked and with closed eyes. Fledged juveniles usually appear as duller versions of the adult.
Distribution and habitat
The family has a worldwide cosmopolitan distribution, breeding on every continent except Antarctica. One species, the Pacific swallow, occurs as a breeding bird on a number of oceanic islands in the Pacific Ocean, the Mascarene martin breeds on Reunion and Mauritius in the Indian Ocean, and a number of migratory species are common vagrants to other isolated islands and even to some sub-Antarctic islands and Antarctica. Many species have enormous worldwide ranges, particularly the barn swallow, which breeds over most of the Northern Hemisphere and winters over most of the Southern Hemisphere.
The family uses a wide range of habitats. They are dependent on flying insects, and as these are common over waterways and lakes, they frequently feed over these, but they can be found in any open habitat, including grasslands, open woodland, savanna, marshes, mangroves, and scrubland, from sea level to high alpine areas. Many species inhabit human-altered landscapes, including agricultural land and even urban areas. Land-use changes have also caused some species to expand their range, most impressively the welcome swallow, which began to colonise New Zealand in the 1920s, started breeding in the 1950s, and is now a common landbird there.
Species breeding in temperate regions migrate during the winter when their insect prey populations collapse. Species breeding in more tropical areas are often more sedentary, although several tropical species are partial migrants or make shorter migrations. In antiquity, swallows were thought to have hibernated in a state of torpor, or even to have withdrawn for the winter under water. Aristotle ascribed hibernation not only to swallows, but also to storks and kites. Hibernation of swallows was considered a possibility even by as acute an observer as Rev. Gilbert White, in his The Natural History and Antiquities of Selborne (1789, based on decades of observations). The idea may have been supported by the habit of some species of roosting in some numbers in dovecotes, nests, and other forms of shelter during harsh weather, with some species even entering torpor. There were several reports of suspected torpor in swallows from 1947 onwards, such as a 1970 report that white-backed swallows in Australia may conserve energy this way, but the first confirmed study that they or any passerine entered torpor was a 1988 study on house martins.
Behaviour and ecology
Swallows are excellent flyers and use these skills to feed and attract mates. Some species, such as the mangrove swallow, are territorial, whereas others are not and simply defend their nesting sites. In general, the male selects a nest site, and then attracts a female using song and flight and (dependent on the species) guards his territory. The size of the territory varies depending on the species of swallow; in colonial-nesting species, it tends to be small, but it may be much larger for solitary nesters. Outside the breeding season, some species may form large flocks, and species may also roost communally. This is thought to provide protection from predators, such as sparrowhawks and hobbies. These roosts can be enormous; one winter-roosting site of barn swallows in Nigeria attracted 1.5 million individuals. Nonsocial species do not form flocks, but recently fledged chicks may remain with their parents for a while after the breeding season. Swallows will attack humans that approach their nests too closely. Colonial species may mob predators and humans that are too close to the colony.
Diet and feeding
For the most part, swallows are insectivorous, taking flying insects on the wing. Across the whole family, a wide range of insects is taken from most insect groups, but the composition of any one prey type in the diet varies by species and with the time of year. Individual species may be selective; they do not scoop up every insect around them, but instead select larger prey items than would be expected by random sampling. In addition, the ease of capture of different insect types affects their rate of predation by swallows. They also avoid certain prey types; in particular, stinging insects such as bees and wasps are generally avoided. In addition to insect prey, a number of species occasionally consume fruits and other plant matter. Species in Africa have been recorded eating the seeds of Acacia trees, and these are even fed to the young of the greater striped swallow.
The swallows generally forage for prey on the wing, but they on occasion snap prey off branches or on the ground. The flight may be fast and involve a rapid succession of turns and banks when actively chasing fast-moving prey; less agile prey may be caught with a slower, more leisurely flight that includes flying in circles and bursts of flapping mixed with gliding. Where several species of swallows feed together, they separate into different niches based on height off the ground, some species feeding closer to the ground and others feeding at higher levels. Similar separation occurs where feeding overlaps with swifts. Niche separation may also occur with the size of prey chosen.
Breeding
The more primitive species nest in existing cavities, for example in an old woodpecker nest, while other species excavate burrows in soft substrate such as sand banks. Swallows in the genera Hirundo, Ptyonoprogne, Cecropis, Petrochelidon, Atronanus and Delichon build mud nests close to overhead shelter in locations that are protected from both the weather and predators. The mud-nesters are most common in the Old World, particularly Africa, whereas cavity-nesters are more common in the New World. Mud-nesting species in particular are limited in areas of high humidity, which causes the mud nests to crumble. Many cave-, bank-, and cliff-dwelling species of swallows nest in large colonies. Mud nests are constructed by both males and females, and amongst the tunnel diggers, the excavation duties are shared, as well. In historical times, the introduction of man-made stone structures such as barns and bridges, together with forest clearance, has led to an abundance of colony sites around the globe, significantly increasing the breeding ranges of some species. Birds living in large colonies typically have to contend with both ectoparasites and conspecific nest parasitism. In barn swallows, old mated males and young unmated males benefit from colonial behaviour, whereas females and mated young males likely benefit more from nesting by themselves.
Pairs of mated swallows are monogamous, and pairs of nonmigratory species often stay near their breeding area all year, though the nest site is defended most vigorously during the breeding season. Migratory species often return to the same breeding area each year, and may select the same nest site if they were previously successful in that location. First-year breeders generally select a nesting site close to where they were raised. The breeding of temperate species is seasonal, whereas that of subtropical or tropical species can either be continuous throughout the year or seasonal. Seasonal species in the subtropics or tropics usually time their breeding to coincide with the peaks in insect activity, which is usually the wet season, but some species, such as the white-bibbed swallow, nest in the dry season to avoid flooding in their riverbank nesting habitat. All swallows defend their nests from egg predators, although solitary species are more aggressive towards predators than colonial species. Overall, the contribution of male swallows towards parental care is the highest of any passerine bird.
The eggs of swallows tend to be white, although those of some mud-nesters are speckled. The typical clutch size is around four to five eggs in temperate areas and two to three eggs in the tropics. The incubation duties are shared in some species, and in others the eggs are incubated solely by the females. Amongst the species where the males help with incubation, their contribution varies amongst species, with some species such as the cliff swallow sharing the duties equally and the female doing most of the work in others. Amongst the barn swallows, the male of the American subspecies helps (to a small extent), whereas the European subspecies does not. Even in species where the male does not incubate the eggs, he may sit on them when the female is away to reduce heat loss (this is different from incubation as that involves warming the eggs, not just stopping heat loss). Incubation stints last for 5–15 minutes and are followed by bursts of feeding activity. From laying, swallow eggs take 10–21 days to hatch, with 14–18 days being more typical.
The chicks of swallows hatch naked, generally with only a few tufts of down. The eyes are closed and do not fully open for up to 10 days. The feathers take a few days to begin to sprout, and the chicks are brooded by the parents until they are able to thermoregulate. On the whole, they develop slowly compared to other passerine birds. The parents do not usually feed the chicks individual insects, but instead feed a bolus of food comprising 10–100 insects. Regardless of whether the species has males that incubate or brood the chicks, the males of all hirundines help feed the chicks. The age at which the young fledge is difficult to determine, as they are enticed out of the nest after three weeks by their parents, but frequently return to the nest afterwards to roost.
Calls
Swallows are able to produce many different calls or songs, which are used to express excitement, to communicate with others of the same species, during courtship, or as an alarm when a predator is in the area. The songs of males are related to the body condition of the bird and are presumably used by females to judge the physical condition and suitability for mating of males. Begging calls are used by the young when soliciting food from their parents. The typical song of swallows is a simple, sometimes musical twittering.
Status and conservation
Species of hirundine that are threatened with extinction are generally endangered due to habitat loss. This is presumed to be the reason behind the decline of the critically endangered white-eyed river martin, a species that is only known from a few specimens collected in Thailand. The species presumably breeds in riverbanks, a much diminished habitat in Southeast Asia. As the species has not been reliably seen since 1980, it may already be extinct. Two insular species, the Bahama swallow and golden swallow, have declined due to forest loss and also competition with introduced species such as starlings and sparrows, which compete with these swallows for nesting sites. The golden swallow formerly bred on the island of Jamaica, but was last seen there in 1989 and is now restricted to the island of Hispaniola.
Relationship with humans
Swallows are tolerated by humans because of their beneficial role as insect eaters, and some species have readily adapted to nesting in and around human habitation. The barn swallow and house martin now rarely use natural sites. The purple martin is also actively encouraged by people to nest around humans and elaborate nest boxes are erected. Enough artificial nesting sites have been created that the purple martin now seldom nests in natural cavities in the eastern part of its range.
Because of the long human experience with these conspicuous species, many myths and legends have arisen as a consequence, particularly relating to the barn swallow. Roman historian Pliny the Elder described a use of painted swallows to deliver a report of the winning horses at a race. There is also the Korean folktale of Heungbu and Nolbu, which teaches a moral lesson about greed and altruism through the mending of a swallow's broken leg.
During the 19th century, Jean Desbouvrie attempted to tame swallows and train them for use as messenger birds, as an alternative to war pigeons. He succeeded in curbing the migratory instinct in young birds and persuaded the government of France to conduct initial testing, but further experimentation stalled. Subsequent attempts to train homing behaviour into swallows and other passerines had difficulty establishing a statistically significant success rate, although the birds have been known to trap themselves in a cage repeatedly to get to the bait.
According to a sailing superstition, swallows are a good omen to those at sea. This probably arose from the fact that swallows are land-based birds, so their appearance informs a sailor that he or she is close to shore.
An old term of venery for swallows is a "flight" or "sweep".
Species list
The family contains 90 species in 21 genera.
| Biology and health sciences | Passerida | null |
1125496 | https://en.wikipedia.org/wiki/Pericyclic%20reaction | Pericyclic reaction | In organic chemistry, a pericyclic reaction is a type of organic reaction wherein the transition state of the molecule has a cyclic geometry, the reaction progresses in a concerted fashion, and the bond orbitals involved in the reaction overlap in a continuous cycle at the transition state. Pericyclic reactions stand in contrast, on the one hand, to linear reactions, which encompass most organic transformations and proceed through an acyclic transition state, and, on the other, to coarctate reactions, which proceed through a doubly cyclic, concerted transition state. Pericyclic reactions are usually rearrangement or addition reactions. The major classes of pericyclic reactions include electrocyclic reactions, cycloadditions, and sigmatropic reactions (the three most important classes), as well as group transfer reactions, ene reactions, cheletropic reactions, and dyotropic reactions. Ene reactions and cheletropic reactions are often classed as group transfer reactions and cycloadditions/cycloeliminations, respectively, while dyotropic reactions and group transfer reactions (if ene reactions are excluded) are rarely encountered.
In general, these are considered to be equilibrium processes, although it is possible to push the reaction in one direction by designing a reaction by which the product is at a significantly lower energy level; this is due to a unimolecular interpretation of Le Chatelier's principle. There is thus a set of "retro" pericyclic reactions.
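One way to make the "significantly lower energy level" argument quantitative is the standard thermodynamic relation between the equilibrium constant and the standard free-energy change, which is general chemistry rather than anything specific to this article:

```latex
% Standard relation between equilibrium constant and free-energy change:
K = e^{-\Delta G^{\circ}/RT},
\qquad
\Delta G^{\circ} = G^{\circ}_{\mathrm{product}} - G^{\circ}_{\mathrm{reactant}}
```

A product designed to lie well below the reactants in free energy makes the free-energy change large and negative, so K >> 1 and the corresponding "retro" reaction is effectively suppressed.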
Mechanism of pericyclic reaction
By definition, pericyclic reactions proceed through a concerted mechanism involving a single, cyclic transition state. Because of this, prior to a systematic understanding of pericyclic processes through the principle of orbital symmetry conservation, they were facetiously referred to as 'no-mechanism reactions'. However, reactions for which pericyclic mechanisms can be drawn often have related stepwise mechanisms proceeding through radical or dipolar intermediates that are also viable. Some classes of pericyclic reactions, such as the [2+2] ketene cycloaddition reactions, can be 'controversial' because their mechanism is sometimes not definitively known to be concerted (or may depend on the reactive system). Moreover, pericyclic reactions also often have metal-catalyzed analogs, although usually these are also not technically pericyclic, since they proceed via metal-stabilized intermediates, and therefore are not concerted.
Despite these caveats, the theoretical understanding of pericyclic reactions is probably among the most sophisticated and well-developed in all of organic chemistry. The understanding of how orbitals interact in the course of a pericyclic process has led to the Woodward–Hoffmann rules, a simple set of criteria to predict whether a pericyclic mechanism for a reaction is likely or favorable. For instance, these rules predict that the [4+2] cycloaddition of butadiene and ethylene under thermal conditions is likely a pericyclic process, while the [2+2] cycloaddition of two ethylene molecules is not. These are consistent with experimental data, supporting an ordered, concerted transition state for the former and a multistep radical process for the latter. Several equivalent approaches, outlined below, lead to the same predictions.
The aromatic transition state theory assumes that the minimum energy transition state for a pericyclic process is aromatic, with the choice of reaction topology determined by the number of electrons involved. For reactions involving (4n + 2)-electron systems (2, 6, 10, ... electrons; odd number of electron pairs), Hückel topology transition states are proposed, in which the reactive portion of the reacting molecule or molecules have orbitals interacting in a continuous cycle with an even number of nodes. In 4n-electron systems (4, 8, 12, ... electrons; even number of electron pairs) Möbius topology transition states are proposed, in which the reacting molecules have orbitals interacting in a twisted continuous cycle with an odd number of nodes. The corresponding (4n + 2)-electron Möbius and 4n-electron Hückel transition states are antiaromatic and are thus strongly disfavored. Aromatic transition state theory results in a particularly simple statement of the generalized Woodward–Hoffmann rules: A pericyclic reaction involving an odd number of electron pairs will proceed through a Hückel transition state (even number of antarafacial components in Woodward–Hoffmann terminology), while a pericyclic reaction involving an even number of electron pairs will proceed through a Möbius transition state (odd number of antarafacial components).
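This parity statement reduces to a two-line rule that is easy to mechanize. Below is a minimal sketch, covering only the thermal, ground-state case described here; the function name and string labels are illustrative and not from any chemistry library.

```python
def thermal_ts_topology(n_electrons: int) -> str:
    """Favored aromatic transition-state topology for a thermal pericyclic
    reaction involving n_electrons (assumed even, as electrons come in pairs)."""
    if n_electrons % 2:
        raise ValueError("pericyclic electron counts are even")
    pairs = n_electrons // 2
    if pairs % 2:  # odd number of pairs: (4n + 2) electrons (2, 6, 10, ...)
        return "Hueckel (even number of antarafacial components)"
    else:          # even number of pairs: 4n electrons (4, 8, 12, ...)
        return "Moebius (odd number of antarafacial components)"

print(thermal_ts_topology(6))  # [4+2] Diels-Alder, 6 electrons: Hueckel
print(thermal_ts_topology(4))  # [2+2], 4 electrons: Moebius topology required
```

Applied to the earlier examples, the 6-electron [4+2] cycloaddition gets an easily attained Hückel geometry, while the 4-electron [2+2] case would demand a geometrically difficult antarafacial (Möbius) arrangement, consistent with its stepwise behavior.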
Equivalently, pericyclic reactions have been analyzed with correlation diagrams, which track the evolution of the molecular orbitals (known as 'correlating' the molecular orbitals) of the reacting molecules as they progress from reactants to products via a transition state, based on their symmetry properties. Reactions are favorable ('allowed') if the ground state of the reactants correlate with the ground state of the products, while they are unfavorable ('forbidden') if the ground state of the reactants correlate with an excited state of the products. This idea is known as the conservation of orbital symmetry. Consideration of the interactions of the highest occupied and lowest unoccupied molecular orbitals (frontier orbital analysis) is another approach to analyzing the transition state of a pericyclic reaction.
Arrow-pushing for pericyclic reactions
The arrow-pushing convention for pericyclic reactions has a somewhat different meaning compared to polar (and radical) reactions. For pericyclic reactions, there is often no obvious movement of electrons from an electron rich source to an electron poor sink. Rather, electrons are redistributed around a cyclic transition state. Thus, electrons can be pushed in either of two directions for a pericyclic reaction. For some pericyclic reactions, however, there is a definite polarization of charge at the transition state due to asynchronicity (bond formation and breaking do not occur to a uniform extent at the transition state). Thus, one direction may be preferred over another, although arguably, both depictions are still formally correct. In the case of the Diels–Alder reaction, resonance arguments make clear the direction of polarization. In more complex situations, however, detailed computations may be needed to determine the direction and extent of polarization.
Pseudopericyclic processes
Closely related to pericyclic processes are reactions that are pseudopericyclic. Although a pseudopericyclic reaction proceeds through a cyclic transition state, two of the orbitals involved are constrained to be orthogonal and cannot interact. Perhaps the most famous example is the hydroboration of an olefin. Although this appears to be a 4-electron Hückel topology forbidden group transfer process, the empty p orbital and sp2 hybridized B–H bond are orthogonal and do not interact. Hence, the Woodward-Hoffmann rules do not apply. (The fact that hydroboration is believed to proceed through initial π complexation may also be relevant.)
In biochemistry
Pericyclic reactions also occur in several biological processes:
Claisen rearrangement of chorismate to prephenate in almost all prototrophic organisms
[1,5]-sigmatropic shift in the transformation of precorrin-8x to hydrogenobyrinic acid
non-enzymatic, photochemical electrocyclic ring opening and a [1,7]-sigmatropic hydride shift in vitamin D synthesis
Isochorismate pyruvate lyase catalyzed conversion of isochorismate into salicylate and pyruvate
| Physical sciences | Organic reactions | Chemistry |
1126212 | https://en.wikipedia.org/wiki/Ziehl%E2%80%93Neelsen%20stain | Ziehl–Neelsen stain | The Ziehl-Neelsen stain, also known as the acid-fast stain, is a bacteriological staining technique used in cytopathology and microbiology to identify acid-fast bacteria under microscopy, particularly members of the Mycobacterium genus. This staining method was initially introduced by Paul Ehrlich (1854–1915) and subsequently modified by the German bacteriologists Franz Ziehl (1859–1926) and Friedrich Neelsen (1854–1898) during the late 19th century.
The acid-fast staining method, in conjunction with auramine phenol staining, serves as the standard diagnostic tool and is widely accessible for rapidly diagnosing tuberculosis (caused by Mycobacterium tuberculosis) and other diseases caused by atypical mycobacteria, such as leprosy (caused by Mycobacterium leprae) and Mycobacterium avium-intracellulare infection (caused by Mycobacterium avium complex) in samples like sputum, gastric washing fluid, and bronchoalveolar lavage fluid. These acid-fast bacteria possess a waxy lipid-rich outer layer that contains high concentrations of mycolic acid, rendering them resistant to conventional staining techniques like the Gram stain.
After the Ziehl-Neelsen staining procedure using carbol fuchsin, acid-fast bacteria are observable as vivid red or pink rods set against a blue or green background, depending on the specific counterstain used, such as methylene blue or malachite green, respectively. Non-acid-fast bacteria and other cellular structures will be colored by the counterstain, allowing for clear differentiation.
Mycobacteria
In anatomic pathology specimens, immunohistochemistry and modifications of Ziehl–Neelsen staining (such as Fite-Faraco staining) have comparable diagnostic utility in identifying Mycobacterium. Both are superior to the traditional Ziehl–Neelsen stain.
Mycobacteria are slow-growing, rod-shaped bacilli that are slightly curved or straight, and are considered to be Gram-positive. Some mycobacteria are free-living saprophytes, but many are pathogens that cause disease in animals and humans. Mycobacterium bovis causes tuberculosis in cattle. Since tuberculosis can be spread to humans, milk is pasteurized to kill any of the bacteria. Mycobacterium tuberculosis, which causes tuberculosis (TB) in humans, is an airborne bacterium that typically infects the human lungs. Testing for TB includes blood testing, skin tests, and chest X-rays. Smears examined for TB are stained using an acid-fast stain. Acid-fast organisms like Mycobacterium contain large amounts of lipid substances within their cell walls called mycolic acids. These acids resist staining by ordinary methods such as a Gram stain. The stain can also be used for a few other bacteria, such as Nocardia. The reagents used for Ziehl–Neelsen staining are carbol fuchsin, acid alcohol, and methylene blue. Acid-fast bacilli are bright red after staining.
Fungi
Ziehl–Neelsen staining is a type of narrow-spectrum fungal stain. Narrow-spectrum fungal stains are selective, and they can help differentiate and identify fungi. The results of Ziehl–Neelsen staining are variable because many fungal cell walls are not acid-fast. An example of a common type of acid-fast fungus that is usually stained with Ziehl–Neelsen staining is Histoplasma (HP). Histoplasma is found in soil and the feces of birds and bats. Humans can contract histoplasmosis by inhalation of the fungal spores. Histoplasma enters the body and goes to the lungs, where the spores turn into yeast. The yeast gets into the bloodstream and affects lymph nodes and other parts of the body. Usually people do not get sick from inhaling the spores, but if they do, they usually have flu-like symptoms. Another variation on this staining method is used in mycology to differentially stain acid-fast incrustations in the cuticular hyphae of certain species of fungi in the genus Russula. Some free endospores can be confused with small yeasts, so staining is used to identify the unknown fungi. It is also useful in the identification of some protozoa, namely Cryptosporidium and Isospora. The Ziehl–Neelsen stain can also hinder diagnosis in the case of paragonimiasis, because the eggs in a sputum sample for ovum and parasite (O&P) testing can be dissolved by the stain.
History
In 1882 Robert Koch discovered the etiology of tuberculosis. Soon after Koch's discovery, Paul Ehrlich developed a stain for Mycobacterium tuberculosis, called the alum hematoxylin stain. Franz Ziehl then altered Ehrlich's staining technique by using carbolic acid as the mordant. Friedrich Neelsen kept Ziehl's choice of mordant but changed the primary stain to carbol fuchsin. Together, Ziehl's and Neelsen's modifications produced the Ziehl–Neelsen stain. Another acid-fast stain was developed by Joseph Kinyoun, who used the Ziehl–Neelsen staining technique but removed the heating step from the procedure. This new stain was named the Kinyoun stain.
Procedure
A typical AFB stain procedure involves dropping the cells in suspension onto a slide, then air drying the liquid and heat fixing the cells.
Studies have shown that an AFB stain without a culture has a poor negative predictive value. An AFB culture should be performed along with an AFB stain; this has a much higher negative predictive value.
Mechanism explanation
The mechanism of action of the Ziehl-Neelsen stain is not completely understood, but it is thought to involve a chemical reaction between the acidic dyes and the cell walls of the bacteria. The acidity of the dyes causes them to bind more strongly to the cell walls of the bacteria than to other cells or tissues. This results in the selective staining of only those cells that have a high density of cell wall material, such as acid-fast bacteria.
The Ziehl–Neelsen stain is a two-step staining process. In the first step, the tissue is stained with a basic fuchsin solution, which stains all cells pink. In the second step, the tissue is incubated in an acid alcohol solution, which decolorizes all cells except for acid-fast cells, which retain the color and appear red. The mechanisms by which this color is produced are not well understood, but it is thought that the interaction of the basic fuchsin with the cell wall components of bacteria creates a new molecule that is responsible for the color.
Modifications
1% sulfuric acid alcohol for actinomycetes, Nocardia.
0.5–1% sulfuric acid alcohol for oocysts of Isospora, Cyclospora.
0.25–0.5% sulfuric acid alcohol for bacterial endospores.
Differential staining – glacial acetic acid used, no heat applied, secondary stain is Loeffler's methylene blue.
Kinyoun modification (or cold Ziehl–Neelsen technique) is also available.
A protocol in which a detergent is substituted for the highly toxic phenol in the fuchsin staining solution.
| Biology and health sciences | Basics_3 | Biology |
1126638 | https://en.wikipedia.org/wiki/Invariant%20%28mathematics%29 | Invariant (mathematics) | In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class.
Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects.
Examples
A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting.
An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change.
The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication.
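A one-line symbolic check of this claim (added here for illustration): for real numbers x, y and a constant c,

\[
|(x+c)-(y+c)| = |x-y|, \qquad |cx - cy| = |c|\,|x-y|,
\]

so translation preserves the distance exactly, while multiplication scales it by |c| and only preserves it when |c| = 1.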
Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios are not invariant under non-uniform scaling (such as stretching). The sum of a triangle's interior angles (180°) is invariant under all the above operations. As another example, all circles are similar: they can be transformed into each other and the ratio of the circumference to the diameter is invariant (denoted by the Greek letter π (pi)).
Some more complicated examples:
The real part and the absolute value of a complex number are invariant under complex conjugation.
The tricolorability of knots.
The degree of a polynomial is invariant under a linear change of variables.
The dimension and homology groups of a topological object are invariant under homeomorphism.
The number of fixed points of a dynamical system is invariant under many mathematical operations.
Euclidean distance is invariant under orthogonal transformations.
Area is invariant under linear maps which have determinant ±1.
Some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross-ratio.
The determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. In other words, the spectrum of a matrix is invariant under a change of basis.
The principal invariants of tensors do not change with rotation of the coordinate system (see Invariants of tensors).
The singular values of a matrix are invariant under orthogonal transformations.
Lebesgue measure is invariant under translations.
The variance of a probability distribution is invariant under translations of the real line. Hence the variance of a random variable is unchanged after the addition of a constant.
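For the variance item above, the invariance follows in one line (a standard derivation, included here for illustration): writing \mu = E[X], so that E[X+c] = \mu + c,

\[
\operatorname{Var}(X+c) = E\big[(X+c-(\mu+c))^2\big] = E\big[(X-\mu)^2\big] = \operatorname{Var}(X).
\]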
The fixed points of a transformation are the elements in the domain that are invariant under the transformation. They may, depending on the application, be called symmetric with respect to that transformation. For example, objects with translational symmetry are invariant under certain translations.
The integral of the Gaussian curvature of a two-dimensional Riemannian manifold is invariant under changes of the Riemannian metric. This is the Gauss–Bonnet theorem.
MU puzzle
The MU puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. The puzzle asks one to start with the word MI and transform it into the word MU, using in each step one of the following transformation rules:
If a string ends with an I, a U may be appended (xI → xIU)
The string after the M may be completely duplicated (Mx → Mxx)
Any three consecutive I's (III) may be replaced with a single U (xIIIy → xUy)
Any two consecutive U's may be removed (xUUy → xy)
An example derivation (with superscripts indicating the applied rules) is
MI →2 MII →2 MIIII →3 MUI →2 MUIUI →1 MUIUIU →2 MUIUIUUIUIU →4 MUIUIIUIU → ...
In light of this, one might wonder whether it is possible to convert MI into MU, using only these four transformation rules. One could spend many hours applying these transformation rules to strings. However, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to MU is impossible. By looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any I's is to have three consecutive I's in the string. This makes the following invariant interesting to consider:
The number of I's in the string is not a multiple of 3.
This is an invariant to the problem, if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. Looking at the net effect of applying the rules on the number of I's and U's, one can see this actually is the case for all rules:
Rule | #I's | #U's | Effect on invariant
1 | +0 | +1 | Number of I's is unchanged. If the invariant held, it still does.
2 | ×2 | ×2 | If n is not a multiple of 3, then 2×n is not either. The invariant still holds.
3 | −3 | +1 | If n is not a multiple of 3, n−3 is not either. The invariant still holds.
4 | +0 | −2 | Number of I's is unchanged. If the invariant held, it still does.
The table above shows clearly that the invariant holds for each of the possible transformation rules, which means that whichever rule one picks, at whatever state, if the number of I's was not a multiple of three before applying the rule, then it will not be afterwards either.
Given that there is a single I in the starting string MI, and one is not a multiple of three, one can then conclude that it is impossible to go from MI to MU (as the number of I's will never be a multiple of three).
Invariant set
A subset S of the domain U of a mapping T: U → U is an invariant set under the mapping when T(S) ⊆ S; that is, x ∈ S implies T(x) ∈ S. Note that the elements of S are not fixed, even though the set S is fixed in the power set of U. (Some authors use the terminology setwise invariant, vs. pointwise invariant, to distinguish between these cases.)
For example, a circle is an invariant subset of the plane under a rotation about the circle's center. Further, a conical surface is invariant as a set under a homothety of space.
An invariant set of an operation T is also said to be stable under T. For example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group.
In linear algebra, if a linear transformation T has an eigenvector v, then the line through 0 and v is an invariant set under T, in which case the eigenvectors span an invariant subspace which is stable under T.
When T is a screw displacement, the screw axis is an invariant line, though if the pitch is non-zero, T has no fixed points.
In probability theory and ergodic theory, invariant sets are usually defined via the stronger property T⁻¹(S) = S. When the map is measurable, invariant sets form a sigma-algebra, the invariant sigma-algebra.
Formal statement
The notion of invariance is formalized in three different ways in mathematics: via group actions, presentations, and deformation.
Unchanged under group action
Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group.
Frequently one will have a group acting on a set X, which leaves one to determine which objects in an associated set F(X) are invariant. For example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. Formally, define the set of lines in the plane P as L(P); then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action.
More importantly, one may define a function on a set, such as "radius of a circle in the plane", and then ask if this function is invariant under a group action, such as rigid motions.
Dual to the notion of invariants are coinvariants, also known as orbits, which formalize the notion of congruence: objects which can be taken to each other by a group action. For example, under the group of rigid motions of the plane, the perimeter of a triangle is an invariant, while the set of triangles congruent to a given triangle is a coinvariant.
These are connected as follows: invariants are constant on coinvariants (for example, congruent triangles have the same perimeter), while two objects which agree in the value of one invariant may or may not be congruent (for example, two triangles with the same perimeter need not be congruent). In classification problems, one might seek to find a complete set of invariants, such that if two objects have the same values for this set of invariants, then they are congruent.
For example, triangles such that all three sides are equal are congruent under rigid motions, via SSS congruence, and thus the lengths of all three sides form a complete set of invariants for triangles. The three angle measures of a triangle are also invariant under rigid motions, but do not form a complete set as incongruent triangles can share the same angle measures. However, if one allows scaling in addition to rigid motions, then the AAA similarity criterion shows that this is a complete set of invariants.
Independent of presentation
Secondly, a function may be defined in terms of some presentation or decomposition of a mathematical object; for instance, the Euler characteristic of a cell complex is defined as the alternating sum of the number of cells in each dimension. One may forget the cell complex structure and look only at the underlying topological space (the manifold) – as different cell complexes give the same underlying manifold, one may ask if the function is independent of choice of presentation, in which case it is an intrinsically defined invariant. This is the case for the Euler characteristic, and a general method for defining and computing invariants is to define them for a given presentation, and then show that they are independent of the choice of presentation. Note that there is no notion of a group action in this sense.
The most common examples are:
The presentation of a manifold in terms of coordinate charts – invariants must be unchanged under change of coordinates.
Various manifold decompositions, as discussed for Euler characteristic.
Invariants of a presentation of a group.
Unchanged under perturbation
Thirdly, if one is studying an object which varies in a family, as is common in algebraic geometry and differential geometry, one may ask if the property is unchanged under perturbation (for example, if an object is constant on families or invariant under change of metric).
Invariants in computer science
In computer science, an invariant is a logical assertion that is always held to be true during a certain phase of execution of a computer program. For example, a loop invariant is a condition that is true at the beginning and the end of every iteration of a loop.
Invariants are especially useful when reasoning about the correctness of a computer program. The theory of optimizing compilers, the methodology of design by contract, and formal methods for determining program correctness, all rely heavily on invariants.
Programmers often use assertions in their code to make invariants explicit. Some object-oriented programming languages have a special syntax for specifying class invariants.
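As a small, self-contained illustration (the example and names are invented here, not taken from a particular source), the following C function computes a quotient and remainder by repeated subtraction and uses assert to make the loop invariant n == q*d + r explicit:

#include <assert.h>

/* Illustrative sketch: compute q = n / d and r = n % d by repeated
   subtraction. The loop invariant  n == q * d + r  holds at the top
   and bottom of every iteration and is checked with assert(). */
void divmod(int n, int d, int *q_out, int *r_out) {
    assert(n >= 0 && d > 0);
    int q = 0, r = n;
    while (r >= d) {
        assert(n == q * d + r);   /* invariant at the top of the iteration */
        r -= d;
        q += 1;
        assert(n == q * d + r);   /* invariant restored before the next test */
    }
    /* on exit, the invariant together with the negated loop condition
       (r < d) establishes that q and r are the quotient and remainder */
    *q_out = q;
    *r_out = r;
}

When the loop exits, the invariant combined with the failed loop test (r < d) is exactly what a correctness argument in the Hoare calculus, mentioned below, would require.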
Automatic invariant detection in imperative programs
Abstract interpretation tools can compute simple invariants of given imperative computer programs. The kinds of properties that can be found depend on the abstract domains used. Typical example properties are single integer variable ranges like 0<=x<1024, relations between several variables like 0<=i-j<2*n-1, and modulus information like y%4==0. Academic research prototypes also consider simple properties of pointer structures.
More sophisticated invariants generally have to be provided manually.
In particular, when verifying an imperative program using the Hoare calculus, a loop invariant has to be provided manually for each loop in the program, which is one of the reasons that this approach is generally impractical for most programs.
In the context of the above MU puzzle example, there is currently no general automated tool that can detect that a derivation from MI to MU is impossible using only the rules 1–4. However, once the abstraction from the string to the number of its "I"s has been made by hand, leading, for example, to the following C program, an abstract interpretation tool will be able to detect that ICount%3 cannot be 0, and hence the "while"-loop will never terminate.
void MUPuzzle(void) {
    volatile int RandomRule;            // nondeterministic choice of rule
    int ICount = 1, UCount = 0;         // counts of I's and U's in the start string "MI"
    while (ICount % 3 != 0)             // non-terminating loop
        switch (RandomRule) {
        case 1: UCount += 1; break;              // rule 1: xI -> xIU
        case 2: ICount *= 2; UCount *= 2; break; // rule 2: Mx -> Mxx
        case 3: ICount -= 3; UCount += 1; break; // rule 3: xIIIy -> xUy
        case 4: UCount -= 2; break;              // rule 4: xUUy -> xy
        } // computed invariant: ICount % 3 == 1 || ICount % 3 == 2
}
| Mathematics | Geometry: General | null |
1126641 | https://en.wikipedia.org/wiki/Invariant%20%28physics%29 | Invariant (physics) | In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the no change of form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment.
Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants.
Examples
In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law.
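As a compact illustration of the translation case (a standard textbook derivation, not specific to this article): if a Lagrangian L(q, \dot q) does not depend on the coordinate q (invariance under spatial translation), the Euler–Lagrange equation gives

\[
\frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q} = 0,
\]

so the conjugate momentum p = \partial L/\partial \dot q is constant in time.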
In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations.
Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities.
Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant.
Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged in transformations so that no additional or different solutions are obtained.
Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity.
Informal usage
In the field of physics, the adjective covariant (as in covariance and contravariance of vectors) is often used informally as a synonym for "invariant". For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein–Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. Thus, a physicist might say that these equations are covariant.
Despite this usage of "covariant", it is more accurate to say that the Klein–Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation by which the invariance is evaluated should be indicated.
| Physical sciences | Quantum mechanics | Physics |
1127278 | https://en.wikipedia.org/wiki/Constantan | Constantan | Constantan, also known in various contexts as Eureka, Advance, and Ferry, refers to a copper–nickel alloy commonly used for its stable electrical resistance across a wide range of temperatures. It usually consists of 55% copper and 45% nickel. Its main feature is the low thermal variation of its resistivity. Other alloys with similarly low temperature coefficients are known, such as manganin (Cu [86%] / Mn [12%] / Ni [2%]).
History
In 1887, Edward Weston discovered that metals can have a negative temperature coefficient of resistance, inventing what he called his "Alloy No. 2."
It was produced in Germany where it was renamed "Konstantan".
Constantan alloy
Of all alloys used in modern strain gauges, constantan is the oldest, and still the most widely used. This situation reflects the fact that constantan has the best overall combination of properties needed for many strain gauge applications. This alloy has, for example, an adequately high strain sensitivity, or gauge factor, which is relatively insensitive to strain level and temperature. Its resistivity is high enough to achieve suitable resistance values in even very small grids, and its temperature coefficient of resistance is fairly low. In addition, constantan is characterized by a good fatigue life and relatively high elongation capability. However, constantan tends to exhibit a continuous drift at elevated temperatures, and this characteristic should be taken into account when zero stability of the strain gauge is critical over a period of hours or days. Constantan is also used for electrical resistance heating and thermocouples.
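For reference, the strain sensitivity (gauge factor) mentioned above is conventionally defined as the fractional resistance change per unit strain (a standard definition, added here for clarity):

\[
GF = \frac{\Delta R / R}{\varepsilon},
\]

where \Delta R/R is the relative change in gauge resistance and \varepsilon is the applied strain; for constantan gauges GF is roughly 2.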
A-alloy
Very importantly, constantan can be processed for self-temperature compensation to match a wide range of test material coefficients of thermal expansion. A-alloy is supplied in self-temperature-compensation (S-T-C) numbers 00, 03, 05, 06, 09, 13, 15, 18, 30, 40, and 50, for use on test materials with corresponding thermal expansion coefficients, expressed in parts per million by length (or μm/m) per degree Fahrenheit.
P alloy
For the measurement of very large strains, 5% (50,000 microstrain) or above, annealed constantan (P alloy) is the grid material normally selected. Constantan in this form is very ductile and, in longer gauge lengths, can be strained to more than 20%. It should be borne in mind, however, that under high cyclic strains the P alloy will exhibit some permanent resistivity change with each cycle, and cause a corresponding zero shift in the strain gauge. Because of this characteristic and the tendency for premature grid failure with repeated straining, P alloy is not ordinarily recommended for cyclic strain applications. P alloy is available with S-T-C numbers of 08 and 40 for use on metals and plastics, respectively.
Physical properties
Temperature measurement
Constantan is also used to form thermocouples with wires made of iron, copper, or chromel. It has an extraordinarily strong negative Seebeck coefficient above 0 degrees Celsius, leading to a good temperature sensitivity.
| Physical sciences | Copper alloys | Chemistry |
4454920 | https://en.wikipedia.org/wiki/Coccinella%20septempunctata | Coccinella septempunctata | Coccinella septempunctata, the common ladybug, the seven-spot ladybird (or, in North America, seven-spotted ladybug or "C-7"), is a carnivorous beetle native to the Old World and is the most common ladybird in Europe. The beetle is also found in North America, Central and Eastern Asia and regions with a temperate climate. Its elytra are of a red colour, but each punctuated with three black spots, with one further spot being spread over the junction of the two, making a total of seven spots, from which the species derives both its common and scientific names (from the Latin septem = "seven" and punctum = "spot").
Biology
Although C. septempunctata larvae and adults mainly eat aphids, they also feed on Thysanoptera, Aleyrodidae, on the larvae of Psyllidae and Cicadellidae, and on eggs and larvae of some beetles and butterflies. They breed one or two generations per year. Adults overwinter in ground litter in parks, gardens and forest edges and under tree bark and rocks.
C. septempunctata has a broad ecological range, generally living wherever there are aphids for it to eat. This includes, amongst other biotopes, meadows, fields, Pontic–Caspian steppe, parkland, gardens, Western European broadleaf forests and mixed forests.
In the United Kingdom, there are fears that the seven-spot ladybird is being outcompeted for food by the harlequin ladybird.
The adults' distinctive spots and conspicuous colours warn of their toxicity, making them unappealing to predators. The species can secrete a fluid from joints in their legs which gives them a foul taste. A threatened ladybird may both play dead and secrete the unappetising substance to protect itself. The seven-spot ladybird synthesizes the toxic alkaloids, N-oxide coccinelline and its free base precoccinelline; depending on sex and diet, the spot size and coloration can provide some indication of how toxic the individual insect is to potential predators.
Distribution
The species can be found in Europe, North Africa, Australia, Cyprus, European Russia, the Caucasus, Siberia, the Russian Far East, Belarus, Ukraine, Moldova, the Transcaucasia, Kazakhstan, Middle Asia, Western Asia, Middle East, Afghanistan, Mongolia, China, North and South Korea, Pakistan, Nepal, North India, Japan, Sri Lanka, southeast Asia, and tropical Africa.
Interaction with humans
Biological control, introductions, and infestations
The species has been repeatedly introduced to North America as a biological control agent to reduce aphid numbers. The first record of successful establishment (after numerous failed attempts to introduce the species) in the United States was in 1973. It has since spread by natural dispersion to New York and Connecticut and to Oklahoma, Georgia and Delaware by recolonization.
In North America, this species has outcompeted many native species, including other Coccinella. Massive swarms of C. septempunctata took place in the drought summer of 1976 in the UK. The species has undergone significant declines on the island of Malta, yet it is unclear whether this decline has occurred at the same rate elsewhere.
In culture
C. septempunctata has been designated the national insect of Finland. In the United States, it is also the official state insect of five different states (Delaware, Massachusetts, New Hampshire, Ohio, and Tennessee).
| Biology and health sciences | Beetles (Coleoptera) | Animals |
9988187 | https://en.wikipedia.org/wiki/Twitter | Twitter | Twitter, officially known as X since 2023, is a social networking service. It is one of the world's largest social media platforms and one of the most-visited websites. Users can share short text messages, images, and videos in short posts commonly known as "tweets" (officially "posts") and like other users' content. The platform also includes direct messaging, video and audio calling, bookmarks, lists, communities, a chatbot (Grok), job search, and Spaces, a social audio feature. Users can vote on context added by approved users using the Community Notes feature. | Technology | Social network and blogging |
18006737 | https://en.wikipedia.org/wiki/Gonorrhea | Gonorrhea | Gonorrhoea or gonorrhea, colloquially known as the clap, is a sexually transmitted infection (STI) caused by the bacterium Neisseria gonorrhoeae. Infection may involve the genitals, mouth, or rectum.
Gonorrhea is spread through sexual contact with an infected person, or from a mother to a child during birth. Infected males may experience pain or burning with urination, discharge from the penis, or testicular pain. Infected females may experience burning with urination, vaginal discharge, vaginal bleeding between periods, or pelvic pain. Complications in females include pelvic inflammatory disease and in males include inflammation of the epididymis. Many of those infected, however, have no symptoms. If untreated, gonorrhea can spread to joints or heart valves. Gonorrhea affects about 0.8% of women and 0.6% of men. An estimated 33 to 106 million new cases occur each year. In 2015, it caused about 700 deaths.
Diagnosis is by testing the urine, urethra in males, vagina or cervix in females. It can be diagnosed by testing a sample collected from the throat or rectum of individuals who have had oral or anal sex, respectively. Testing all women who are sexually active and less than 25 years of age each year as well as those with new sexual partners is recommended; the same recommendation applies in men who have sex with men (MSM).
Gonorrhea can be prevented with the use of condoms, having sex with only one person who is uninfected, and by not having sex. Treatment is usually with ceftriaxone by injection and azithromycin by mouth. Resistance has developed to many previously used antibiotics and higher doses of ceftriaxone are occasionally required.
Signs and symptoms
Gonorrhea infections of mucosal membranes can cause swelling, itching, pain, and the formation of pus. The time from exposure to symptoms is usually between two and 14 days, with most symptoms appearing between four and six days after infection, if they appear at all. Both men and women with infections of the throat may experience a sore throat, though such infection does not produce symptoms in 90% of cases. Other symptoms may include swollen lymph nodes around the neck. Either sex can become infected in the eyes or rectum if these tissues are exposed to the bacterium, which can lead to pain with bowel movements, rectal discharge, or constipation.
Women
Half of women with gonorrhea are asymptomatic but the other half experience vaginal discharge, lower abdominal pain, or pain with sexual intercourse associated with inflammation of the uterine cervix. Common medical complications of untreated gonorrhea in women include pelvic inflammatory disease which can cause scars to the fallopian tubes and result in later ectopic pregnancy among those women who become pregnant.
Men
Most infected men with symptoms have inflammation of the penile urethra associated with a burning sensation during urination and discharge from the penis. In men, discharge with or without burning occurs in half of all cases and is the most common symptom of the infection. This pain is caused by a narrowing and stiffening of the urethral lumen. The most common medical complication of gonorrhea in men is inflammation of the epididymis. Gonorrhea is also associated with increased risk of prostate cancer.
Infants
If not treated, gonococcal ophthalmia neonatorum will develop in 28% of infants born to women with gonorrhea.
Spread
If left untreated, gonorrhea can spread from the original site of infection and infect and damage the joints, skin, and other organs. Indications of this can include fever, skin rashes, sores, and joint pain and swelling. In advanced cases, gonorrhea may cause a general feeling of tiredness similar to other infections. It is also possible for an individual to have an allergic reaction to the bacteria, in which case any appearing symptoms will be greatly intensified. Very rarely it may settle in the heart, causing endocarditis, or in the spinal column, causing meningitis. Both are more likely among individuals with suppressed immune systems, however.
Cause
Gonorrhea is caused by the bacterium Neisseria gonorrhoeae. Previous infection does not confer immunity – a person who has been infected can become infected again by exposure to someone who is infected. Infected persons may be able to infect others repeatedly without having any signs or symptoms of their own.
Spread
The infection is usually spread from one person to another through vaginal, oral, or anal sex. Men have a 20% risk of getting the infection from a single act of vaginal intercourse with an infected woman. The risk for men who have sex with men (MSM) is higher. Insertive MSM may get a penile infection from anal intercourse, while receptive MSM may get anorectal gonorrhea. Women have a 60–80% risk of getting the infection from a single act of vaginal intercourse with an infected man.
A mother may transmit gonorrhea to her newborn during childbirth; when affecting the infant's eyes, it is referred to as ophthalmia neonatorum. It may also be spread through objects contaminated with body fluid from an infected person. The bacterium does not survive long outside the body, typically dying within minutes to hours.
Risk factors
Sexually active women younger than 25 and men who have sex with men are at increased risk of getting gonorrhea.
Other risk factors include:
Having a new sex partner
Having a sex partner who has other partners
Having more than one sex partner
Having had gonorrhea or another sexually transmitted infection
Complications
Untreated gonorrhea can lead to major complications, such as:
Infertility in women. Gonorrhea can spread into the uterus and fallopian tubes, causing pelvic inflammatory disease (PID). PID can result in scarring of the tubes, greater risk of pregnancy complications and infertility, and can be fatal, particularly in the immunocompromised. PID requires immediate treatment.
Infertility in men. Gonorrhea can cause a small, coiled tube in the rear portion of the testicles where the sperm ducts are located (epididymis) to become inflamed (epididymitis). Untreated epididymitis can lead to infertility.
Infection that spreads to the joints and other areas of the body. The bacterium that causes gonorrhea can spread through the bloodstream and infect other parts of the body, including the joints. Fever, rash, skin sores, joint pain, swelling and stiffness are possible results.
Increased risk of HIV/AIDS. Having gonorrhea increases the susceptibility to infection with human immunodeficiency virus (HIV), the virus that leads to AIDS. People who have both gonorrhea and HIV (untreated by anti-retroviral therapy) are able to pass both diseases more readily to their partners.
Complications in babies. Babies who contract gonorrhea from their mothers during birth can develop blindness, sores on the scalp and infections.
Diagnosis
Traditionally, gonorrhea was diagnosed with Gram stain and culture; however, newer polymerase chain reaction (PCR)-based testing methods are becoming more common. If initial treatment fails, a culture should be done to determine the sensitivity of the bacteria to antibiotics.
Tests that use PCR (also known as nucleic acid amplification) to identify genes unique to N. gonorrhoeae are recommended for screening and diagnosis of gonorrhea infection. These PCR-based tests require a sample of urine, urethral swabs, or cervical/vaginal swabs. Culture (growing colonies of bacteria in order to isolate and identify them) and Gram-stain (staining of bacterial cell walls to reveal morphology) can also be used to detect the presence of N. gonorrhoeae in all specimen types except urine. Studies of the swab sample method for gonorrhea infections have not shown any difference in the number of patients treated, whether the sample was collected at home or in the clinic. The implications for number of patients cured, reinfection rates, partner management, and safety are unknown.
If Gram-negative, oxidase-positive diplococci are visualized on direct Gram stain of urethral pus (male genital infection), no further testing is needed to establish the diagnosis of gonorrhea infection. However, direct Gram stain of cervical swabs is not useful because the N. gonorrhoeae organisms are less concentrated in these samples. The chance of a false positive test is also higher for a cervical swab, as Gram-negative diplococci native to the normal vaginal flora cannot be distinguished from N. gonorrhoeae in that context. Thus, cervical swabs must be cultured under the conditions described above. If oxidase positive, Gram-negative diplococci are isolated from a culture of a cervical/vaginal swab specimen, then the diagnosis is made. Culture is especially useful for diagnosis of infections of the throat, rectum, eyes, blood, or joints—areas where PCR-based tests are not well established in all labs. Culture is also useful for antimicrobial sensitivity testing, analyzing treatment failure, and epidemiological purposes (outbreaks, surveillance).
In patients who may have disseminated gonococcal infection (DGI), all possible mucosal sites should be cultured (e.g., pharynx, cervix, urethra, rectum). Three sets of blood cultures should also be obtained. Synovial fluid should be collected in cases of septic arthritis.
All people testing positive for gonorrhea should be tested for other sexually transmitted infections such as chlamydia, syphilis, and human immunodeficiency virus. Studies have found co-infection with chlamydia ranging from 46 to 54% in young people with gonorrhea. Among persons in the United States between 14 and 39 years of age, 46% of people with gonorrheal infection also have chlamydial infection. For this reason, gonorrhea and chlamydia testing are often combined. People diagnosed with gonorrhea infection have a fivefold increase risk of HIV transmission. Additionally, infected persons who are HIV positive are more likely to shed and transmit HIV to uninfected partners during an episode of gonorrhea.
Screening
The United States Preventive Services Task Force (USPSTF) recommends screening for gonorrhea in women at increased risk of infection, which includes all sexually active women younger than 25 years. Extragenital gonorrhea and chlamydia are highest in men who have sex with men (MSM). Additionally, the USPSTF also recommends routine screening in people who have previously tested positive for gonorrhea or have multiple sexual partners and individuals who use condoms inconsistently, provide sexual favors for money, or have sex while under the influence of alcohol or drugs.
Screening for gonorrhea in women who are (or intend to become) pregnant, and who are found to be at high risk for sexually transmitted infections, is recommended as part of prenatal care in the United States.
Prevention
As with most sexually transmitted infections, the risk of infection can be reduced significantly by the correct use of condoms, and can be removed almost entirely by not having sex or by limiting sexual activity to a mutually monogamous relationship with an uninfected person.
Those previously infected are encouraged to return for follow up care to make sure that the infection has been eliminated. In addition to the use of phone contact, the use of email and text messaging have been found to improve the re-testing for infection.
Newborn babies coming through the birth canal are given erythromycin ointment in the eyes to prevent blindness from infection. The underlying gonorrhea should be treated; if this is done then usually a good prognosis will follow.
Treatment
Antibiotics
Antibiotics are used to treat gonorrhea infections. As of 2016, both ceftriaxone by injection and azithromycin by mouth are most effective. However, due to increasing rates of antibiotic resistance, local susceptibility patterns must be taken into account when deciding on treatment. Ertapenem is a potential effective alternative treatment for ceftriaxone-resistant gonorrhea.
Adults may have eyes infected with gonorrhoea and require proper personal hygiene and medications. Addition of topical antibiotics have not been shown to improve cure rates compared to oral antibiotics alone in treatment of eye infected gonorrhea. For newborns, erythromycin ointment is recommended as a preventative measure for gonococcal infant conjunctivitis.
Infections of the throat can be especially problematic, as antibiotics have difficulty becoming sufficiently concentrated there to destroy the bacteria. This is amplified by the fact that pharyngeal gonorrhoea is mostly asymptomatic, and gonococci and commensal Neisseria species can coexist for long time periods in the pharynx and share anti-microbial resistance genes. Accordingly, an enhanced focus on early detection (i.e., screening of high-risk populations such as men who have sex with men, for whom PCR testing should be considered) and appropriate treatment of pharyngeal gonorrhoea is important.
Sexual partners
It is recommended that sexual partners be tested and potentially treated. One option for treating sexual partners of people infected is patient-delivered partner therapy (PDPT), which involves providing prescriptions or medications to the person to take to his/her partner without the health care provider's first examining him/her.
The United States' Centers for Disease Control and Prevention (CDC) currently recommend that individuals who have been diagnosed and treated for gonorrhea avoid sexual contact with others until at least one week past the final day of treatment in order to prevent the spread of the bacterium.
Antibiotic resistance
Many antibiotics that were once effective including penicillin, tetracycline, and fluoroquinolones are no longer recommended because of high rates of resistance. Resistance to cefixime has reached a level such that it is no longer recommended as a first-line agent in the United States, and if it is used a person should be tested again after a week to determine whether the infection still persists. Public health officials are concerned that an emerging pattern of resistance may predict a global epidemic. In 2016, the WHO published new guidelines for treatment, stating "There is an urgent need to update treatment recommendations for gonococcal infections to respond to changing antimicrobial resistance (AMR) patterns of N. gonorrhoeae. High-level resistance to previously recommended quinolones is widespread and decreased susceptibility to the extended-spectrum (third-generation) cephalosporins, another recommended first-line treatment in the 2003 guidelines, is increasing and several countries have reported treatment failures."
Prognosis
Gonorrhea if left untreated may last for weeks or months with higher risks of complications. One of the complications of gonorrhea is systemic dissemination resulting in skin pustules or petechia, septic arthritis, meningitis, or endocarditis. This occurs in between 0.6 and 3% of infected women and 0.4 and 0.7% of infected men.
In men, inflammation of the epididymis, prostate gland, and urethra can result from untreated gonorrhea. In women, the most common result of untreated gonorrhea is pelvic inflammatory disease. Other complications include inflammation of the tissue surrounding the liver, a rare complication associated with Fitz-Hugh–Curtis syndrome; septic arthritis in the fingers, wrists, toes, and ankles; septic abortion; chorioamnionitis during pregnancy; neonatal or adult blindness from conjunctivitis; and infertility. Men who have had a gonorrhea infection have an increased risk of getting prostate cancer.
Epidemiology
About 88 million cases of gonorrhea occur each year, out of the 448 million new cases of curable STI each year – that also includes syphilis, chlamydia and trichomoniasis. The prevalence was highest in the African region, the Americas, and Western Pacific, and lowest in Europe. In 2013, it caused about 3,200 deaths, up from 2,300 in 1990.
In the United Kingdom, 196 per 100,000 males 20 to 24 years old and 133 per 100,000 females 16 to 19 years old were diagnosed in 2005. In 2013, the CDC estimated that more than 820,000 people in the United States get a new gonorrheal infection each year. Fewer than half of these infections are reported to CDC. In 2011, 321,849 cases of gonorrhea were reported to the CDC. After the implementation of a national gonorrhea control program in the mid-1970s, the national gonorrhea rate declined from 1975 to 1997. After a small increase in 1998, the gonorrhea rate has decreased slightly since 1999. In 2004, the rate of reported gonorrheal infections was 113.5 per 100,000 persons.
In the US, it is the second-most-common bacterial sexually transmitted infection; chlamydia remains first. According to the CDC, African Americans are most affected by gonorrhea, accounting for 69% of all gonorrhea cases in 2010.
The World Health Organization warned in 2017 of the spread of untreatable strains of gonorrhea, following analysis of at least three cases in Japan, France and Spain, which survived all antibiotic treatment.
History
Some scholars translate the biblical terms (for a male, ) and (for a female, ) as gonorrhea.
It has been suggested that mercury was used as a treatment for gonorrhea. Surgeons' tools on board the recovered English warship the Mary Rose included a syringe that, according to some, was used to inject the mercury via the urinary meatus into crewmen with gonorrhea. The name "the clap", in reference to the disease, is recorded as early as the sixteenth century, referring to a medieval red-light district in Paris, Les Clapiers. Translating to "The rabbit holes", it was so named for the small huts in which prostitutes worked.
Silver nitrate was one of the widely used drugs in the 19th century. However, it became replaced by Protargol. Arthur Eichengrün invented this type of colloidal silver, which was marketed by Bayer from 1897 onward. The silver-based treatment was used until the first antibiotics came into use in the 1940s.
The exact time of onset of gonorrhea as prevalent disease or epidemic cannot be accurately determined from the historical record. One of the first reliable notations occurs in the Acts of the English Parliament which, in 1161, passed a law to reduce the spread of "the perilous infirmity of burning". The symptoms described are consistent with, but not diagnostic of, gonorrhea. A similar decree was passed by Louis IX in France in 1256, replacing regulation with banishment. Similar symptoms were noted at the siege of Acre by Crusaders.
Coincidental to, or dependent on, the appearance of a gonorrhea epidemic, several changes occurred in European medieval society. Cities hired public health doctors to treat affected patients without right of refusal. Pope Boniface VIII rescinded the requirement that physicians complete studies for the lower orders of the Catholic priesthood.
Medieval public health physicians in the employ of their cities were required to treat prostitutes infected with the "burning", as well as lepers and other epidemic patients. After Pope Boniface completely secularized the practice of medicine, physicians were more willing to treat a sexually transmitted infection.
Research
A vaccine for gonorrhea has been developed that is effective in mice. It will not be available for human use until further studies have demonstrated that it is both safe and effective in the human population. Development of a vaccine has been complicated by the ongoing evolution of resistant strains and antigenic variation (the ability of N. gonorrhoeae to disguise itself with different surface markers to evade the immune system).
As N. gonorrhoeae is closely related to N. meningitidis and they have 80–90% homology in their genetic sequences, some cross-protection by meningococcal vaccines is plausible. A study published in 2017 showed that the MeNZB group B meningococcal vaccine provided partial protection against gonorrhea. The vaccine efficiency was calculated to be 31%. In June 2023, GlaxoSmithKline won fast-track designation from the Food and Drug Administration for its vaccine candidate against gonorrhea.
| Biology and health sciences | Infectious disease | null |
18008163 | https://en.wikipedia.org/wiki/Groundwater%20remediation | Groundwater remediation | Groundwater remediation is the process that is used to treat polluted groundwater by removing the pollutants or converting them into harmless products. Groundwater is water present below the ground surface that saturates the pore space in the subsurface. Globally, between 25 per cent and 40 per cent of the world's drinking water is drawn from boreholes and dug wells. Groundwater is also used by farmers to irrigate crops and by industries to produce everyday goods. Most groundwater is clean, but groundwater can become polluted, or contaminated as a result of human activities or as a result of natural conditions.
The many and diverse activities of humans produce innumerable waste materials and by-products. Historically, the disposal of such waste has not been subject to many regulatory controls. Consequently, waste materials have often been disposed of or stored on land surfaces, from which they percolate into the underlying groundwater. As a result, the contaminated groundwater is unsuitable for use.
Current practices can still impact groundwater, such as the over application of fertilizer or pesticides, spills from industrial operations, infiltration from urban runoff, and leaking from landfills. Using contaminated groundwater causes hazards to public health through poisoning or the spread of disease, and the practice of groundwater remediation has been developed to address these issues. Contaminants found in groundwater cover a broad range of physical, inorganic chemical, organic chemical, bacteriological, and radioactive parameters. Pollutants and contaminants can be removed from groundwater by applying various techniques, thereby bringing the water to a standard that is commensurate with various intended uses.
Techniques
Ground water remediation techniques span biological, chemical, and physical treatment technologies. Most ground water treatment techniques utilize a combination of technologies. Some of the biological treatment techniques include bioaugmentation, bioventing, biosparging, bioslurping, and phytoremediation. Some chemical treatment techniques include ozone and oxygen gas injection, chemical precipitation, membrane separation, ion exchange, carbon adsorption, aqueous chemical oxidation, and surfactant enhanced recovery. Some chemical techniques may be implemented using nanomaterials. Physical treatment techniques include, but are not limited to, pump and treat, air sparging, and dual phase extraction.
Biological treatment technologies
Bioaugmentation
If a treatability study shows no degradation (or an extended lab period before significant degradation is achieved) in contamination contained in the groundwater, then inoculation with strains known to be capable of degrading the contaminants may be helpful. This process increases the reactive enzyme concentration within the bioremediation system and subsequently may increase contaminant degradation rates over the nonaugmented rates, at least initially after inoculation.
Bioventing
Bioventing is an on site remediation technology that uses microorganisms to biodegrade organic constituents in the groundwater system. Bioventing enhances the activity of indigenous bacteria and archaea and stimulates the natural in situ biodegradation of hydrocarbons by inducing air or oxygen flow into the unsaturated zone and, if necessary, by adding nutrients. During bioventing, oxygen may be supplied through direct air injection into residual contamination in soil. Bioventing primarily assists in the degradation of adsorbed fuel residuals, but also assists in the degradation of volatile organic compounds (VOCs) as vapors move slowly through biologically active soil.
Biosparging
Biosparging is an in situ remediation technology that uses indigenous microorganisms to biodegrade organic constituents in the saturated zone. In biosparging, air (or oxygen) and nutrients (if needed) are injected into the saturated zone to increase the biological activity of the indigenous microorganisms. Biosparging can be used to reduce concentrations of petroleum constituents that are dissolved in groundwater, adsorbed to soil below the water table, and within the capillary fringe.
Bioslurping
Bioslurping combines elements of bioventing and vacuum-enhanced pumping of free-product that is lighter than water (light non-aqueous phase liquid or LNAPL) to recover free-product from the groundwater and soil, and to bioremediate soils. The bioslurper system uses a “slurp” tube that extends into the free-product layer. Much like a straw in a glass draws liquid, the pump draws liquid (including free-product) and soil gas up the tube in the same process stream. Pumping lifts LNAPLs, such as oil, off the top of the water table and from the capillary fringe (i.e., an area just above the saturated zone, where water is held in place by capillary forces). The LNAPL is brought to the surface, where it is separated from water and air. The biological processes in the term “bioslurping” refer to aerobic biological degradation of the hydrocarbons when air is introduced into the unsaturated zone contaminated soil.
Phytoremediation
In the phytoremediation process certain plants and trees are planted whose roots absorb contaminants from groundwater over time. This process can be carried out in areas where the roots can tap the groundwater. A few examples of plants used in this process: the Chinese ladder fern (Pteris vittata), also known as the brake fern, is a highly efficient accumulator of arsenic; genetically altered cottonwood trees are good absorbers of mercury; and transgenic Indian mustard plants soak up selenium well.
Permeable reactive barriers
Certain types of permeable reactive barriers utilize biological organisms in order to remediate groundwater.
Chemical treatment technologies
Chemical precipitation
Chemical precipitation is commonly used in wastewater treatment to remove hardness and heavy metals. In general, the process involves the addition of a precipitating agent to an aqueous waste stream in a stirred reaction vessel, either batchwise or with steady flow. Most metals can be converted to insoluble compounds by chemical reactions between the agent and the dissolved metal ions. The insoluble compounds (precipitates) are removed by settling and/or filtering.
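How far precipitation can lower a dissolved metal concentration is governed by the solubility product of the compound formed. The following minimal Python sketch estimates the residual concentration of a hypothetical divalent metal after hydroxide precipitation; the solubility product and target pH are illustrative assumptions, not design values.

```python
# Hypothetical illustration: residual dissolved metal after hydroxide
# precipitation, using an assumed solubility product (Ksp). The Ksp value
# and target pH below are illustrative, not design figures.

def residual_metal_molar(ksp: float, ph: float, hydroxide_stoich: int = 2) -> float:
    """Estimate equilibrium [M] (mol/L) for M(OH)n from Ksp = [M][OH]^n."""
    oh = 10 ** (ph - 14.0)           # [OH-] from pH via Kw = 1e-14
    return ksp / oh ** hydroxide_stoich

# Example: a divalent metal with an assumed Ksp of 1e-15, precipitated at pH 10
print(f"{residual_metal_molar(1e-15, 10.0):.2e} mol/L")  # ~1.00e-07 mol/L
```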
Ion exchange
Ion exchange for groundwater remediation is virtually always carried out by passing the water downward under pressure through a fixed bed of granular medium (either cation exchange media or anion exchange media) or spherical beads. Cations in the exchange medium are displaced by certain cations from the solution, and anions in the medium are displaced by certain anions from the solution. Ion exchange media most often used for remediation are zeolites (both natural and synthetic) and synthetic resins.
Carbon adsorption
The most common activated carbon used for remediation is derived from bituminous coal. Activated carbon adsorbs volatile organic compounds from ground water; the compounds attach to the graphite-like surface of the activated carbon.
Chemical oxidation
In this process, called in situ chemical oxidation (ISCO), chemical oxidants are delivered into the subsurface to destroy organic molecules, converting them to water and carbon dioxide or to nontoxic substances. The oxidants are introduced as either liquids or gases. Oxidants include air or oxygen, ozone, and certain liquid chemicals such as hydrogen peroxide, permanganate and persulfate.
Ozone and oxygen gas can be generated on site from air and electricity and directly injected into soil and groundwater contamination. The process has the potential to oxidize and/or enhance naturally occurring aerobic degradation. Chemical oxidation has proven to be an effective technique for dense non-aqueous phase liquid (DNAPL) where it is present.
Surfactant enhanced recovery
Surfactant-enhanced recovery increases the mobility and solubility of contaminants adsorbed to the saturated soil matrix or present as dense non-aqueous phase liquid. Surfactant-enhanced recovery injects surfactants (surface-active agents that are the primary ingredients in soaps and detergents) into contaminated groundwater. A typical system uses an extraction pump to remove groundwater downstream from the injection point. The extracted groundwater is treated aboveground to separate the injected surfactants from the contaminants and groundwater. Once the surfactants have separated from the groundwater, they are re-used. The surfactants used are non-toxic, food-grade, and biodegradable. Surfactant-enhanced recovery is used most often when the groundwater is contaminated by dense non-aqueous phase liquids (DNAPLs). These dense compounds, such as trichloroethylene (TCE), sink in groundwater because they have a higher density than water. They then act as a continuous source for contaminant plumes that can stretch for miles within an aquifer. These compounds may biodegrade very slowly. They are commonly found in the vicinity of the original spill or leak, where capillary forces have trapped them.
Permeable reactive barriers
Some permeable reactive barriers utilize chemical processes to achieve groundwater remediation.
Physical treatment technologies
Pump and treat
Pump and treat is one of the most widely used groundwater remediation technologies. In this process groundwater is pumped to the surface and is coupled with either biological or chemical treatments to remove the impurities.
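As a first approximation, the decline in dissolved concentration under pump and treat is sometimes idealized as exponential flushing of a single well-mixed pore volume. The Python sketch below illustrates that idealization; the pumping rate, pore volume, retardation factor and starting concentration are all made-up numbers, and real aquifers typically show much slower tailing and rebound.

```python
# A minimal sketch of an idealized pump-and-treat flushing model. It treats
# the contaminated zone as one well-mixed volume, so concentration decays
# exponentially with pumping; real aquifers exhibit tailing and rebound.
import math

def concentration(c0, q, pore_volume, retardation, t):
    """Dissolved concentration after pumping time t.

    c0          initial concentration (e.g. mg/L)
    q           pumping rate (volume per unit time)
    pore_volume contaminated pore volume (same volume units as q)
    retardation retardation factor >= 1 for sorbing contaminants
    """
    return c0 * math.exp(-q * t / (pore_volume * retardation))

# Illustrative numbers only: 50 mg/L TCE, 100 m3/day pumping,
# 50,000 m3 pore volume, retardation factor 3
for years in (1, 5, 10):
    t_days = 365 * years
    print(f"{years} yr: {concentration(50.0, 100.0, 5e4, 3.0, t_days):.1f} mg/L")
```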
Air sparging
Air sparging is the process of blowing air directly into the ground water. As the bubbles rise, the contaminants are removed from the groundwater by physical contact with the air (i.e., stripping) and are carried up into the unsaturated zone (i.e., soil). As the contaminants move into the soil, a soil vapor extraction system is usually used to remove vapors.
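The stripping step in air sparging rests on Henry's law partitioning of a volatile contaminant between water and the injected air. The Python sketch below shows the removal fraction for a single equilibrium contact as a function of the air-to-water ratio; the dimensionless Henry's law constant used for TCE is a rough room-temperature assumption.

```python
# A sketch of the equilibrium behind air sparging: a volatile contaminant
# partitions between groundwater and injected air according to Henry's law,
# C_air = H * C_water (H = dimensionless Henry's law constant). The H value
# below for TCE is an order-of-magnitude assumption, not a reference datum.

def stripped_fraction(h: float, air_water_ratio: float) -> float:
    """Fraction removed in one equilibrium contact of air with water."""
    return (h * air_water_ratio) / (1.0 + h * air_water_ratio)

H_TCE = 0.4               # assumed dimensionless Henry constant near 20 C
for ratio in (1, 5, 20):  # injected air volume per volume of water contacted
    print(f"air:water {ratio}:1 -> {stripped_fraction(H_TCE, ratio):.0%} removed")
```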
Dual phase vacuum extraction
Dual-phase vacuum extraction (DPVE), also known as multi-phase extraction, is a technology that uses a high-vacuum system to remove both contaminated groundwater and soil vapor. In DPVE systems, a high-vacuum extraction well is installed with its screened section in the zone of contaminated soils and groundwater. Fluid/vapor extraction systems depress the water table and water flows faster to the extraction well. DPVE removes contaminants from above and below the water table. As the water table around the well is lowered from pumping, unsaturated soil is exposed. This area, called the capillary fringe, is often highly contaminated, as it holds undissolved chemicals, chemicals that are lighter than water, and vapors that have escaped from the dissolved groundwater below. Contaminants in the newly exposed zone can be removed by vapor extraction. Once above ground, the extracted vapors and liquid-phase organics and groundwater are separated and treated. Use of dual-phase vacuum extraction with these technologies can shorten the cleanup time at a site, because the capillary fringe is often the most contaminated area.
Monitoring-well oil skimming
Monitoring-wells are often drilled for the purpose of collecting ground water samples for analysis. These wells, which are usually six inches or less in diameter, can also be used to remove hydrocarbons from the contaminant plume within a groundwater aquifer by using a belt-style oil skimmer. Belt oil skimmers, which are simple in design, are commonly used to remove oil and other floating hydrocarbon contaminants from industrial water systems.
A monitoring-well oil skimmer remediates various oils, ranging from light fuel oils such as petrol, light diesel or kerosene to heavy products such as No. 6 oil, creosote and coal tar. It consists of a continuously moving belt that runs on a pulley system driven by an electric motor. The belt material has a strong affinity for hydrocarbon liquids and sheds water. The belt, which can have a vertical drop of 100+ feet, is lowered into the monitoring well past the LNAPL/water interface. As the belt moves through this interface, it picks up liquid hydrocarbon contaminant, which is removed and collected at ground level as the belt passes through a wiper mechanism. To the extent that DNAPL hydrocarbons settle at the bottom of a monitoring well, and the lower pulley of the belt skimmer reaches them, these contaminants can also be removed by a monitoring-well oil skimmer.
Typically, belt skimmers remove very little water with the contaminant, so simple weir-type separators can be used to collect any remaining hydrocarbon liquid, which often makes the water suitable for its return to the aquifer. Because the small electric motor uses little electricity, it can be powered from solar panels or a wind turbine, making the system self-sufficient and eliminating the cost of running electricity to a remote location.
| Technology | Environmental remediation | null |
403300 | https://en.wikipedia.org/wiki/Canis | Canis | Canis is a genus of the Caninae which includes multiple extant species, such as wolves, dogs, coyotes, and golden jackals. Species of this genus are distinguished by their moderate to large size, their massive, well-developed skulls and dentition, long legs, and comparatively short ears and tails.
Taxonomy
The genus Canis (Carl Linnaeus, 1758) was published in the 10th edition of Systema Naturae and included the dog-like carnivores: the domestic dog, wolves, coyotes and jackals. All species within Canis are phylogenetically closely related, have 78 chromosomes, and can potentially interbreed. In 1926, the International Commission on Zoological Nomenclature (ICZN) in Opinion 91 included Genus Canis on its Official Lists and Indexes of Names in Zoology. In 1955, the ICZN's Direction 22 added Canis familiaris as the type species for genus Canis to the official list.
The cladogram below is based on the DNA phylogeny of Lindblad-Toh et al. (2005), modified to incorporate recent findings on Canis species.
In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group recommended that, because DNA evidence shows the side-striped jackal (Canis adustus) and black-backed jackal (Canis mesomelas) form a monophyletic lineage that sits outside the Canis/Cuon/Lycaon clade, they should be placed in a distinct genus, Lupulella Hilzheimer, 1906, with the names Lupulella adusta and Lupulella mesomelas.
Evolution
See further: Evolution of the canids
The fossil record shows that feliforms and caniforms emerged within the clade Carnivoramorpha 43 million YBP. The caniforms included the fox-like genus Leptocyon, whose various species existed from 24 million YBP before branching 11.9 million YBP into Vulpes (foxes) and Canini (canines). The jackal-sized Eucyon existed in North America from 10 million YBP and by the Early Pliocene about 6-5 million YBP the coyote-like Eucyon davisi invaded Eurasia. The canids that had emigrated from North America to Eurasia – Eucyon, Vulpes, and Nyctereutes – were small to medium-sized predators during the Late Miocene and Early Pliocene but they were not the top predators.
For Canis populations in the New World, Eucyon in North America gave rise to early North American Canis which first appeared in the Miocene (6 million YBP) in south-western United States and Mexico. By 5 million YBP the larger Canis lepophagus, ancestor of wolves and coyotes, appeared in the same region.
Around 5 million years ago, some of the Old World Eucyon evolved into the first members of Canis, and the position of the canids would change to become a dominant predator across the Palearctic. The wolf-sized C. chihliensis appeared in northern China in the Mid-Pliocene around 4-3 million YBP. This was followed by an explosion of Canis evolution across Eurasia in the Early Pleistocene around 1.8 million YBP in what is commonly referred to as the wolf event. It is associated with the formation of the mammoth steppe and continental glaciation. Canis spread to Europe in the forms of C. arnensis, C. etruscus, and C. falconeri.
However, a 2021 genetic study of the dire wolf (Aenocyon dirus), previously considered a member of Canis, found that it represented the last member of an ancient lineage of canines originally indigenous to the New World that had diverged prior to the appearance of Canis, and that its lineage had been distinct since the Miocene with no evidence of introgression with Canis. The study hypothesized that the Neogene canids in the New World, Canis armbrusteri and Canis edwardii, were possibly members of the distinct dire wolf lineage that had convergently evolved a very similar appearance to members of Canis. True members of Canis, namely the gray wolf and coyote, likely only arrived in the New World during the Late Pleistocene, where their dietary flexibility and/or ability to hybridize with other canids allowed them to survive the Quaternary extinction event, unlike the dire wolf.
Xenocyon (strange wolf) is an extinct subgenus of Canis. The diversity of the Canis group decreased by the end of the Early Pleistocene to the Middle Pleistocene and was limited in Eurasia to the small wolves of the Canis mosbachensis–Canis variabilis group and the large hypercarnivorous Canis (Xenocyon) lycaonoides. The hypercarnivore Xenocyon gave rise to the modern dhole and the African wild dog.
Dentition and bite force
Dentition relates to the arrangement of teeth in the mouth, with the dental notation for the upper-jaw teeth using the upper-case letters I to denote incisors, C for canines, P for premolars, and M for molars, and the lower-case letters i, c, p and m to denote the mandible teeth. Teeth are numbered using one side of the mouth and from the front of the mouth to the back. In carnivores, the upper premolar P4 and the lower molar m1 form the carnassials that are used together in a scissor-like action to shear the muscle and tendon of prey.
Canids use their premolars for cutting and crushing except for the upper fourth premolar P4 (the upper carnassial), which is only used for cutting. They use their molars for grinding except for the lower first molar m1 (the lower carnassial), which has evolved for both cutting and grinding depending on the canid's dietary adaptation. On the lower carnassial the trigonid is used for slicing and the talonid is used for grinding. The ratio between the trigonid and the talonid indicates a carnivore's dietary habits, with a larger trigonid indicating a hypercarnivore and a larger talonid indicating a more omnivorous diet. Because of its low variability, the length of the lower carnassial is used to provide an estimate of a carnivore's body size.
A study of the estimated bite force at the canine teeth of a large sample of living and fossil mammalian predators, when adjusted for their body mass, found that for placental mammals the bite force at the canines (in Newtons/kilogram of body weight) was greatest in the extinct dire wolf (163), followed among the modern canids by the four hypercarnivores that often prey on animals larger than themselves: the African hunting dog (142), the gray wolf (136), the dhole (112), and the dingo (108). The bite force at the carnassials showed a similar trend to the canines. A predator's largest prey size is strongly influenced by its biomechanical limits.
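Because the figures above are already normalized by body mass, a rough absolute bite force at the canines can be recovered by multiplying each value by an assumed adult body mass. In the Python sketch below, the newtons-per-kilogram values come from the text, while the body masses are loose illustrative assumptions only.

```python
# Converting the mass-adjusted values above back into absolute canine-tooth
# bite forces. The N/kg figures come from the cited study; the body masses
# are rough assumptions for illustration only.
bite_force_per_kg = {        # newtons per kilogram of body weight (from text)
    "dire wolf (extinct)": 163,
    "African hunting dog": 142,
    "gray wolf": 136,
    "dhole": 112,
    "dingo": 108,
}
assumed_mass_kg = {          # illustrative adult body masses (assumed)
    "dire wolf (extinct)": 60,
    "African hunting dog": 25,
    "gray wolf": 40,
    "dhole": 17,
    "dingo": 15,
}
for species, n_per_kg in bite_force_per_kg.items():
    force = n_per_kg * assumed_mass_kg[species]  # N/kg * kg = newtons
    print(f"{species}: ~{force} N at the canines")
```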
Behavior
Description and sexual dimorphism
There is little variance between male and female canids. Canids tend to live as monogamous pairs. Wolves, dholes, coyotes, and jackals live in groups that include breeding pairs and their offspring. Wolves may live in extended family groups. To take prey larger than themselves, the African wild dog, the dhole, and the gray wolf depend on their jaws, as they cannot use their forelimbs to grapple with prey. They work together as a pack consisting of an alpha pair and their offspring from the current and previous years. Social mammal predators prey on herbivores with a body mass similar to that of the combined mass of the predator pack. The gray wolf specializes in preying on the vulnerable individuals of large prey, and a pack of timber wolves can bring down a moose.
Mating behaviour
The genus Canis contains many different species and a wide range of mating systems that vary depending on the species. A 2017 study found that in some species of canids females use their sexual status to gain food resources. The study looked at wolves and dogs. Wolves are typically monogamous and form pair-bonds, whereas free-ranging dogs are promiscuous and mate with multiple individuals. The study found that in both species females tried to gain access to food more often, and were more successful in monopolizing a food resource, when in heat. Outside of the breeding season their efforts were not as persistent or successful. This shows that the food-for-sex hypothesis likely plays a role in food sharing among canids and acts as a direct benefit for the females.
Another study on free-ranging dogs found that social factors played a significant role in the determination of mating pairs. The study, done in 2014, looked at the social regulation of reproduction in dogs. It found that females in heat sought out dominant males and were more likely to mate with a dominant male who appeared to be a quality leader, while being more likely to reject submissive males. Furthermore, male-male competition was more aggressive in the presence of high-ranking females. This suggests that females prefer dominant males and males prefer high-ranking females, meaning that social cues and status play a large role in the determination of mating pairs in dogs.
Canids also show a wide range of parental care, and in 2018 a study showed that sexual conflict plays a role in the determination of intersexual parental investment. The study looked at coyote mating pairs and found that paternal investment was increased to match, or nearly match, the maternal investment. The amount of parental care provided by the fathers was also shown to fluctuate depending on the level of care provided by the mother.
Another study on parental investment showed that in free-ranging dogs, mothers modify their energy and time investment in their pups as the pups age. Because of the high mortality of free-ranging dogs at a young age, a mother's fitness can be drastically reduced. This study found that as the pups aged, the mother shifted from high-energy care to lower-energy care so that she could care for her offspring for a longer duration at a reduced energy requirement. By doing this, mothers increase the likelihood of their pups surviving infancy and reaching adulthood, and thereby increase their own fitness.
A study done in 2017 found that aggression between male and female gray wolves varied and changed with age. Males were more likely than females to chase away rival packs and lone individuals, and became increasingly aggressive with age. Females, by contrast, were less aggressive, and their level of aggression remained constant throughout their lives. This requires further research but suggests that intersexual aggression levels in gray wolves relate to their mating system.
Tooth breakage
Tooth breakage is a frequent result of carnivores' feeding behaviour. Carnivores include both pack hunters and solitary hunters. The solitary hunter depends on a powerful bite at the canine teeth to subdue its prey, and thus exhibits a strong mandibular symphysis. In contrast, a pack hunter, which delivers many shallower bites, has a comparably weaker mandibular symphysis. Thus, researchers can use the strength of the mandibular symphysis in fossil carnivore specimens to determine what kind of hunter it was (a pack hunter or a solitary hunter) and even how it consumed its prey. The mandibles of canids are buttressed behind the carnassial teeth, allowing the animals to crack bones with their post-carnassial teeth (molars M2 and M3). A study found that the modern gray wolf and the red wolf (C. rufus) possess greater buttressing than all other extant canids and the extinct dire wolf. This indicates that both are better adapted for cracking bone than other canids.
A study of nine modern carnivores indicates that one in four adults had suffered tooth breakage and that half of these breakages were of the canine teeth. The highest frequency of breakage occurred in the spotted hyena, which is known to consume all of its prey including the bone. The least breakage occurred in the African wild dog. The gray wolf ranked between these two. The eating of bone increases the risk of accidental fracture due to the relatively high, unpredictable stresses that it creates. The most commonly broken teeth are the canines, followed by the premolars, carnassial molars, and incisors. Canines are the teeth most likely to break because of their shape and function, which subjects them to bending stresses that are unpredictable in direction and magnitude. The risk of tooth fracture is also higher when taking and consuming large prey.
In comparison to extant gray wolves, the extinct Beringian wolves included many more individuals with moderately to heavily worn teeth and with a significantly greater number of broken teeth. The frequencies of fracture ranged from a minimum of 2% found in the Northern Rocky Mountain wolf (Canis lupus irremotus) up to a maximum of 11% found in Beringian wolves. The distribution of fractures across the tooth row also differs, with Beringian wolves having much higher frequencies of fracture for incisors, carnassials, and molars. A similar pattern was observed in spotted hyenas, suggesting that increased incisor and carnassial fracture reflects habitual bone consumption because bones are gnawed with the incisors and then cracked with the carnassials and molars.
Coyotes, jackals, and wolves
The gray wolf (C. lupus), the Ethiopian wolf (C. simensis), eastern wolf (C. lycaon), and the African golden wolf (C. lupaster) are four of the many Canis species referred to as "wolves". Species that are too small to attract the word "wolf" are called coyotes in the Americas and jackals elsewhere. Although these may not be more closely related to each other than they are to C. lupus, they are, as fellow Canis species, more closely related to wolves and domestic dogs than they are to foxes, maned wolves, or other canids which do not belong to the genus Canis. The word "jackal" is applied to the golden jackal (C. aureus), found across southwestern and south-central Asia, and the Balkans in Europe.
African migration
The first record of Canis on the African continent is Canis sp. A from South Turkwel, Kenya, dated 3.58–3.2 million years ago. In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonised Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of the African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. In 2017, the fossil remains of a new Canis species, named Canis othmanii, were discovered among remains found at Wadi Sarrat, Tunisia, from deposits dating to 700,000 years ago. This canine shows a morphology more closely associated with canids from Eurasia than with those from Africa.
| Biology and health sciences | Canines | Animals |
403388 | https://en.wikipedia.org/wiki/Seikan%20Tunnel | Seikan Tunnel | The Seikan Tunnel is a dual-gauge railway tunnel in Japan, with a portion under the seabed of the Tsugaru Strait, which separates Aomori Prefecture on the main Japanese island of Honshu from the northern island of Hokkaido. The track level is about 100 m below the seabed and 240 m below sea level. The tunnel is part of the standard-gauge Hokkaido Shinkansen and the narrow-gauge Kaikyō Line of the Hokkaido Railway Company (JR Hokkaido)'s Tsugaru-Kaikyō Line. The name Seikan comes from combining the on'yomi readings of the first characters of Aomori, the nearest major city on the Honshu side of the strait, and Hakodate, the nearest major city on the Hokkaido side.
The Seikan Tunnel is the world's longest undersea tunnel by overall length (the Channel Tunnel, while shorter, has a longer undersea segment). It is also the second-deepest transport tunnel below sea level after the Ryfylke Tunnel, a road tunnel in Norway that opened in 2019, and the second longest main-line railway tunnel after the Gotthard Base Tunnel in Switzerland, opened in 2016.
Overview
The tunnel was constructed using conventional construction methods, including tunnel boring machine (TBM) and New Austrian tunneling method (NATM). The construction cost of the tunnel itself was 538.4 billion yen at the planning stage, but it actually cost 745.5 billion yen. The construction cost of the strait line, including the attachment line, was 689 billion yen at the planning stage, but ended up costing 900 billion yen. The number of fatalities in the construction was 34.
Although construction began during the heyday of the Seikan route, by the time the tunnel was completed passenger traffic to Hokkaido, even from eastern Japan, was already dominated by aircraft, and construction of the Hokkaido Shinkansen had been frozen. On the freight side, owing to the deterioration of labor-management relations at JNR at the time, including frequent strikes and work-to-rule campaigns, freight transportation continued to stagnate as it lost market share to ferries and coastal shipping. In addition, because maintenance costs are large (a great deal of spring water must be pumped out even after completion), the huge investment came to be regarded as a sunk cost, it was said that abandoning the tunnel would be more economical, and it was variously ridiculed as one of the "three great follies of the Shōwa era", a "useless long object", and a "quagmire tunnel".
However, since its opening the tunnel has played an important role in freight transportation by JR Freight between Hokkaido and Honshu, with 21 regular round trips a day; including special trains, about 50 freight trains run through it daily in both directions. The ability to achieve stable and safe transportation unaffected by the weather has had a significant effect; in particular, the transportation volume of agricultural products, a key industry in Hokkaido, has increased dramatically.
History
The idea to connect the islands of Honshu and Hokkaido by a fixed link was proposed by the Imperial Japanese Army in the late 1920s for strategic reasons and was part of the army's idea of linking the Japanese main islands with Japanese-held Korea and Sakhalin Islands, the latter then being divided with Japan and the Soviet Union.
The tunnel plan was handed over to the Japanese National Railway in 1946 with preliminary geological surveys and feasibility studies, induced by the loss of overseas territory at the end of World War II and the need to accommodate returnees. In 1954, five ferries, including the Tōya Maru, sank in the Tsugaru Strait during a typhoon, killing 1,430 passengers. The following year, Japanese National Railways (JNR) expedited the tunnel feasibility study. Also of concern was the increasing traffic between the two islands. A booming economy saw traffic levels on the JNR-operated Seikan Ferry double to 4,040,000 passengers/year from 1955 to 1965, and cargo levels rose 1.7 times to 6,240,000 tonnes/year.
Excavation work began in 1964. In September 1971, the decision was made to commence work on the main tunnel. Drilling began in 1972 from both sides: Hamana on the northern tip of Honshu and Yunosato in Hokkaido. To avoid danger from earthquakes, the tunnel goes through dense volcanic rock. A Shinkansen-capable cross-section was selected, with plans to extend the Shinkansen network. Arduous construction in difficult geological conditions proceeded despite multiple challenges, including drilling difficulties, tunnel floodings and the 1973 oil crisis, which delayed the completion of the tunnel. Thirty-four workers were killed during construction.
The necessity for the project was questioned at times during construction, as the 1971 traffic predictions were overestimates. Instead of the traffic rate increasing as predicted to a peak in 1985, it peaked earlier in 1978 and then proceeded to decrease. The decrease was attributed to the slowdown in Japan's economy since the first oil crisis in 1973 and to advances made in air transport facilities and longer-range sea transport.
By mid-1982, only about 1,000 metres of the tunnel remained to be excavated.
On 27 January 1983, Japanese Prime Minister Yasuhiro Nakasone pressed a switch that set off a blast completing the pilot tunnel. Similarly, on 10 March 1985, Minister of Transport Tokuo Yamashita symbolically bored through the main tunnel by detonating a dynamite charge on the last few metres of earth.
The tunnel was opened on 13 March 1988, having cost a total of ¥1.1 trillion (US$7 billion) to construct, almost 12 times the original budget, much of which was due to inflation over the years. To commemorate the event, a commemorative 500-yen coin depicting the tunnel was issued by the Japan Mint in 1988. Once the tunnel was completed, all railway transport between Honshu and Hokkaido used it, particularly conventional express trains; the commuter ferry service between the two islands run by Japanese National Railways also ended. However, for passenger transport, 90% of people use air travel because of its speed and cost. For example, travelling between Tokyo and Sapporo by train (Tokyo Station to Shin-Sapporo Station) takes eight hours, with a transfer from the Shinkansen to a narrow-gauge express train at Hakodate. By air, the journey is 1 hour and 45 minutes, or 3 hours and 30 minutes including airport access times. Deregulation and competition in Japanese domestic air travel have brought down prices on the Tokyo-Sapporo route, making rail more expensive in comparison.
The Hokutosei overnight train service began after the completion of the Seikan Tunnel; a later and more luxurious Cassiopeia overnight train service was often fully booked. Both were withdrawn following the commencement of Hokkaido Shinkansen services (in August 2015 and March 2016 respectively), with freight trains being the only regular service utilising the narrow-gauge line since that time. JR Hokkaido is exploring the use of "Train on Train" technology to remove the threat that the shock wave created in front of Shinkansen trains traveling at full speed poses to freight trains operating on Japanese standard narrow-gauge track in a tunnel setting. If successful, it will allow the Hokkaido Shinkansen to travel at full speed inside the tunnel in the future.
As of March 2019, Shinkansen trains operate through the tunnel to Shin-Hakodate-Hokuto Station in Hakodate, connecting Tokyo and Shin-Hakodate-Hokuto stations in 3 hours and 58 minutes. Their maximum speed is 140 km/h within the tunnel, 260 km/h outside it, and 320 km/h to the south of Morioka. It was expected that by 2018 one daily service would be run at 160 km/h through the tunnel. The final stage is proposed to open to Sapporo Station in 2031 and is expected to shorten the Tokyo-Sapporo rail journey to five hours. The Hokkaido Shinkansen will be operated by JR Hokkaido.
Construction timeline
24 April 1946: Geological surveying begins.
26 September 1954: The train ferry Tōya Maru sinks in the Tsugaru Strait.
23 March 1964: Japan Railway Construction Public Corporation is established.
28 September 1971: Construction on the main tunnel begins.
27 January 1983: Pilot tunnel breakthrough.
10 March 1985: Main tunnel breakthrough.
13 March 1988: The tunnel opens.
26 March 2016: Shinkansen services commence operation through the tunnel, regular narrow gauge passenger services through the tunnel cease.
Surveying, construction and geology
Surveying started in 1946 and construction began in 1971. By August 1982, less than 700 metres of the tunnel remained to be excavated. First contact between the two sides was in 1983. The Tsugaru Strait has eastern and western necks, both approximately 20 km across. Initial surveys undertaken in 1946 indicated that the eastern neck was up to 200 m deep with volcanic geology. The western neck had a maximum depth of 140 m and geology consisting mostly of sedimentary rocks of the Neogene period. The western neck was selected, with its conditions considered favourable for tunnelling.
The geology of the undersea portion of the tunnel consists of volcanic rock, pyroclastic rock, as well as sedimentary rock of the Neogene period. The area is folded into a nearly vertical syncline, which means that the youngest rock is in the centre of the strait and encountered last. Divided roughly into thirds, the Honshū side consists of volcanic rocks (notably andesite and basalt); the Hokkaido side consists of sedimentary rocks (notably Tertiary period tuff and mudstone); and the centre portion consists of Kuromatsunai strata (Tertiary period sand-like mudstone). Igneous intrusions and faults caused crushing of the rock and complicated the tunnelling procedures.
Initial geological investigation occurred from 1946 to 1963 and involved drilling the sea-bed, sonic surveys, submarine boring, observations using a mini-submarine, and seismic and magnetic surveys. To establish a greater understanding, a horizontal pilot boring was undertaken along the line of the service and main tunnels. Tunnelling occurred simultaneously from the northern and southern ends. The dry-land portions were tackled with traditional mountain tunnelling techniques, with a single main tunnel. For the undersea portion, however, three bores were excavated with successively increasing diameters: an initial pilot tunnel, a service tunnel, and finally the main tunnel. The service tunnel was periodically connected to the main tunnel by a series of connecting drifts. The pilot tunnel serves as the service tunnel for the central five-kilometre portion. Beneath the Tsugaru Strait, the use of a tunnel boring machine (TBM) was abandoned after less than two kilometres owing to the variable nature of the rock and the difficulty of accessing the face for advanced grouting. Blasting with dynamite and mechanical picking were then used to excavate.
Maintenance
A 2002 report by Michitsugu Ikuma described, for the undersea section, that "the tunnel structure appears to remain in a good condition." The amount of inflow has been decreasing with time, although it "increases right after a large earthquake".
As of March 2018, when the tunnel turned 30 years old, maintenance costs since 1999 had amounted to 30 billion yen (US$286 million). There are plans to increase speeds and to provide mobile communication along the full length of the track.
Structure
Initially, only narrow-gauge track was laid through the tunnel, but in 2005 construction started on the Hokkaido Shinkansen project, which included laying dual-gauge track (providing standard-gauge capability) and extending the Shinkansen network through the tunnel. Shinkansen services to Hakodate commenced in March 2016 and are proposed to be extended to Sapporo by 2031. The tunnel is laid with continuous welded rail.
Two stations used to be within the tunnel—Tappi-Kaitei Station and Yoshioka-Kaitei Station. Both closed with the construction of the Hokkaido Shinkansen, but continue to serve as emergency escape points. In the event of a fire or other disaster, the stations provide the equivalent safety of a much shorter tunnel. The effectiveness of the escape shafts at the emergency stations is enhanced by having exhaust fans to extract smoke, television cameras to help route passengers to safety, thermal (infrared) fire alarm systems, and water spray nozzles. Before the construction of the Hokkaido Shinkansen, both stations contained museums detailing the history and function of the tunnel that could be visited on special sightseeing tours. The museums are now closed and the space provides storage for work on the Hokkaido Shinkansen. The two were the first railway stations in the world built under the sea.
| Technology | Tunnels | null |
403400 | https://en.wikipedia.org/wiki/Stibnite | Stibnite | Stibnite, sometimes called antimonite, is a sulfide mineral with the formula Sb2S3. This soft grey material crystallizes in an orthorhombic space group. It is the most important source for the metalloid antimony. The name is derived from the Greek stibi, through the Latin stibium, the former name for the mineral and the element antimony.
Structure
Stibnite has a structure similar to that of arsenic trisulfide, As2S3. The Sb(III) centers, which are pyramidal and three-coordinate, are linked via bent two-coordinate sulfide ions. However, some studies suggest that the actual coordination polyhedra of antimony are SbS7, with (3+4) coordination at the M1 site and (5+2) at the M2 site. Some of the secondary bonds impart cohesion and are connected with packing. Stibnite is grey when fresh, but can turn superficially black due to oxidation in air.
Properties
The melting point of Sb2S3 is 550 °C. The band gap is 1.88 eV at room temperature and it is a photoconductor. Stibnite is also toxic upon ingestion, with symptoms similar to those of arsenic poisoning.
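The quoted band gap implies an optical absorption edge near the red end of the visible spectrum, which is consistent with photoconductivity under visible light. A minimal Python sketch of the standard conversion:

```python
# The 1.88 eV band gap quoted above corresponds to an optical absorption
# edge in the red part of the spectrum; photons with shorter wavelengths
# can excite carriers, which is why Sb2S3 behaves as a photoconductor.
PLANCK_EV_NM = 1239.84        # h*c expressed in eV*nm

band_gap_ev = 1.88
edge_nm = PLANCK_EV_NM / band_gap_ev
print(f"absorption edge ~ {edge_nm:.0f} nm")   # ~660 nm (red light)
```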
Uses
Pastes of Sb2S3 powder in fat or in other materials have been used since c. 3000 BC as eye cosmetics in the Mediterranean and farther afield; in this use, Sb2S3 is called kohl. It was used to darken the brows and lashes, or to draw a line around the perimeter of the eye.
Antimony trisulfide finds use in pyrotechnic compositions, namely in the glitter and fountain mixtures. Needle-like crystals, "Chinese needles", are used in glitter compositions and white pyrotechnic stars. The "dark pyro" version is used in flash powders to increase their sensitivity and sharpen their report. It is also a component of modern safety matches. It was formerly used in flash compositions, but its use was abandoned due to toxicity and sensitivity to static electricity.
Stibnite was used as a medication and a cosmetic from as early as protodynastic ancient Egypt. The Sunan Abi Dawood reports: “The prophet Muhammad said: 'Among the best types of collyrium is antimony (ithmid), for it clears the vision and makes the hair sprout.'”
The 17th century alchemist Eirenaeus Philalethes, also known as George Starkey, describes stibnite in his alchemical commentary An Exposition upon Sir George Ripley's Epistle. Starkey used stibnite as a precursor to philosophical mercury, which was itself a hypothetical precursor to the philosopher's stone.
Occurrence
Stibnite occurs in hydrothermal deposits and is associated with realgar, orpiment, cinnabar, galena, pyrite, marcasite, arsenopyrite, cervantite, stibiconite, calcite, ankerite, barite and chalcedony.
Small deposits of stibnite are common, but large deposits are rare. The world's largest deposit of antimony, the Xikuangshan mine, yields high quality crystals in paragenesis with calcite. It occurs in Canada, Mexico, Peru, Japan, Germany, Romania, Italy, France, England, Algeria, and Kalimantan, Borneo. In the United States it is found in Arkansas, Idaho, Nevada, California, and Alaska.
Historically, the Romans used stibnite mined in Dacia to make colourless glass; this glassmaking ended when the Roman Empire lost the province.
As of May 2007, the largest specimen on public display (1000 pounds) is at the American Museum of Natural History. The largest documented single crystals of stibnite measured ~60×5×5 cm and originated from different locations including Japan, France and Germany.
| Physical sciences | Minerals | Earth science |
403627 | https://en.wikipedia.org/wiki/Genetic%20diversity | Genetic diversity | Genetic diversity is the total number of genetic characteristics in the genetic makeup of a species. It ranges widely, from the number of species to differences within species, and can be correlated to the span of survival for a species. It is distinguished from genetic variability, which describes the tendency of genetic characteristics to vary.
Genetic diversity serves as a way for populations to adapt to changing environments. With more variation, it is more likely that some individuals in a population will possess variations of alleles that are suited for the environment. Those individuals are more likely to survive to produce offspring bearing that allele. The population will continue for more generations because of the success of these individuals.
The academic field of population genetics includes several hypotheses and theories regarding genetic diversity. The neutral theory of evolution proposes that diversity is the result of the accumulation of neutral substitutions. Diversifying selection is the hypothesis that two subpopulations of a species live in different environments that select for different alleles at a particular locus. This may occur, for instance, if a species has a large range relative to the mobility of individuals within it. Frequency-dependent selection is the hypothesis that as alleles become more common, they become more vulnerable. This occurs in host–pathogen interactions, where a high frequency of a defensive allele among the host means that it is more likely that a pathogen will spread if it is able to overcome that allele.
Within-species diversity
A study conducted by the National Science Foundation in 2007 found that genetic diversity (within-species diversity) and biodiversity are dependent upon each other — i.e. that diversity within a species is necessary to maintain diversity among species, and vice versa. According to the lead researcher in the study, Dr. Richard Lankau, "If any one type is removed from the system, the cycle can break down, and the community becomes dominated by a single species." Genotypic and phenotypic diversity have been found in all species at the protein, DNA, and organismal levels; in nature, this diversity is nonrandom, heavily structured, and correlated with environmental variation and stress.
The interdependence between genetic and species diversity is delicate. Changes in species diversity lead to changes in the environment, leading to adaptation of the remaining species. Changes in genetic diversity, such as the loss of species, lead to a loss of biological diversity. Loss of genetic diversity in domestic animal populations has also been studied and attributed to the extension of markets and economic globalization.
Neutral and adaptive genetic diversity
Neutral genetic diversity consists of genes that do not increase fitness and are not responsible for adaptability; natural selection does not act on these neutral genes. Adaptive genetic diversity consists of genes that increase fitness and are responsible for adaptability to changes in the environment. Adaptive genes are responsible for ecological, morphological, and behavioral traits. Natural selection acts on adaptive genes, which allows the organisms to evolve. The rate of evolution on adaptive genes is greater than on neutral genes because of the influence of selection. However, it has been difficult to identify alleles for adaptive genes, and thus adaptive genetic diversity is most often measured indirectly. For example, heritability can be measured as h², or adaptive population differentiation can be measured as Q_ST. It may be possible to identify adaptive genes through genome-wide association studies by analyzing genomic data at the population level.
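For orientation, one common convention for Q_ST in diploid, outcrossing populations is sketched below; definitions vary somewhat across the literature, so this should be read as one usual formulation rather than the definition used by any particular study.

```latex
% One common formulation of Q_ST for diploid, outcrossing populations:
\[
  Q_{ST} \;=\; \frac{\sigma_{B}^{2}}{\sigma_{B}^{2} + 2\,\sigma_{W}^{2}},
\]
% where \sigma_B^2 is the additive genetic variance among populations and
% \sigma_W^2 is the additive genetic variance within populations.
```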
Identifying adaptive genetic diversity is important for conservation because the adaptive potential of a species may dictate whether it survives or becomes extinct, especially as the climate changes. This is magnified by a lack of understanding of whether low neutral genetic diversity is correlated with high genetic drift and high mutation load. In a review of current research, Teixeira and Huber (2021) found that some species, such as those in the genus Arabidopsis, appear to have high adaptive potential despite suffering from low overall genetic diversity due to severe bottlenecks. A species with low neutral genetic diversity may therefore possess high adaptive genetic diversity; but because adaptive genes are difficult to identify, a measurement of overall genetic diversity remains important for planning conservation efforts, and a species that has experienced a rapid decline in genetic diversity may be highly susceptible to extinction.
Evolutionary importance of genetic diversity
Adaptation
Variation in a population's gene pool allows natural selection to act upon traits that allow the population to adapt to changing environments. Selection for or against a trait can occur with a changing environment, resulting in an increase in genetic diversity (if a new mutation is selected for and maintained) or a decrease in genetic diversity (if a disadvantageous allele is selected against). Hence, genetic diversity plays an important role in the survival and adaptability of a species. The capability of the population to adapt to the changing environment will depend on the presence of the necessary genetic diversity: the more genetic diversity a population has, the more likely it is that the population will be able to adapt and survive. Conversely, the vulnerability of a population to changes, such as climate change or novel diseases, increases as genetic diversity is reduced. For example, the inability of koalas to adapt to fight Chlamydia and the koala retrovirus (KoRV) has been linked to the koala's low genetic diversity. This low genetic diversity also has geneticists concerned for the koalas' ability to adapt to climate change and human-induced environmental changes in the future.
Small populations
Large populations are more likely to maintain genetic material and thus generally have higher genetic diversity. Small populations are more likely to experience the loss of diversity over time by random chance, an example of genetic drift. When an allele (variant of a gene) drifts to fixation, the other allele at the same locus is lost, resulting in a loss of genetic diversity. In small populations, inbreeding, or mating between individuals with similar genetic makeup, is more likely to occur, perpetuating more common alleles to the point of fixation and thus decreasing genetic diversity. Concerns about genetic diversity are therefore especially important with large mammals due to their small population sizes and high levels of human-caused population effects.
A genetic bottleneck can occur when a population goes through a period of low number of individuals, resulting in a rapid decrease in genetic diversity. Even with an increase in population size, the genetic diversity often continues to be low if the entire species began with a small population, since beneficial mutations (see below) are rare, and the gene pool is limited by the small starting population. This is an important consideration in the area of conservation genetics, when working toward a rescued population or species that is genetically healthy.
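The size dependence described above can be made concrete with a toy Wright-Fisher simulation of a single biallelic locus. The Python sketch below, which assumes NumPy is available, resamples allele frequencies binomially each generation and reports the expected heterozygosity 2p(1-p) after 100 generations; the population sizes and other parameters are arbitrary illustrations.

```python
# Toy Wright-Fisher drift model: each generation, 2N gene copies are drawn
# binomially from the current allele frequency. Smaller populations lose
# heterozygosity (2p(1-p)) faster and fix one allele sooner.
import numpy as np

def mean_final_heterozygosity(n_individuals, p0=0.5, generations=100,
                              replicates=2000, seed=1):
    rng = np.random.default_rng(seed)
    copies = 2 * n_individuals                 # diploid gene copies
    p = np.full(replicates, p0)                # starting allele frequency
    for _ in range(generations):
        p = rng.binomial(copies, p) / copies   # resample each replicate
    return float(np.mean(2 * p * (1 - p)))     # expected heterozygosity

for n in (10, 100, 1000):
    h = mean_final_heterozygosity(n)
    print(f"N = {n:4d}: mean heterozygosity after 100 generations ~ {h:.3f}")
```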
Mutation
Random mutations consistently generate genetic variation. A mutation will increase genetic diversity in the short term, as a new gene is introduced to the gene pool. However, the persistence of this gene depends on drift and selection (see above). Most new mutations have a neutral or negative effect on fitness, while some have a positive effect. A beneficial mutation is more likely to persist and thus have a long-term positive effect on genetic diversity. Mutation rates differ across the genome, and larger populations accumulate more new mutations. In smaller populations a mutation is less likely to persist because it is more likely to be eliminated by drift.
Gene flow
Gene flow, often by migration, is the movement of genetic material (for example by pollen in the wind, or the migration of a bird). Gene flow can introduce novel alleles to a population. These alleles can be integrated into the population, thus increasing genetic diversity.
For example, an insecticide-resistance mutation arose in the African mosquito Anopheles gambiae. Migration of some A. gambiae mosquitoes into a population of Anopheles coluzzii mosquitoes resulted in a transfer of the beneficial resistance gene from one species to the other. The genetic diversity was increased in A. gambiae by mutation and in A. coluzzii by gene flow.
In agriculture
In crops
When humans initially started farming, they used selective breeding to pass on desirable traits of the crops while omitting the undesirable ones. Selective breeding leads to monocultures: entire farms of nearly genetically identical plants. Little to no genetic diversity makes crops extremely susceptible to widespread disease; bacteria morph and change constantly, and when a disease-causing bacterium changes to attack a specific genetic variation, it can easily wipe out vast quantities of the species. If the genetic variation that the bacterium is best at attacking happens to be the one that humans have selectively bred to use for harvest, the entire crop will be wiped out.
The nineteenth-century Great Famine in Ireland was caused in part by a lack of biodiversity. Because new potato plants do not come from seed but from pieces of the parent plant, no genetic diversity develops, and the entire crop is essentially a clone of one potato, making it especially susceptible to an epidemic. In the 1840s, much of Ireland's population depended on potatoes for food. The Irish planted mainly the "lumper" variety of potato, which was susceptible to a rot-causing oomycete called Phytophthora infestans. This pathogen destroyed the vast majority of the potato crop and left one million people to starve to death.
Genetic diversity in agriculture does not only relate to disease but also to herbivores. Similarly to the example above, monoculture agriculture selects for traits that are uniform throughout the plot. If this genotype is susceptible to certain herbivores, this could result in the loss of a large portion of the crop. One way farmers get around this is through inter-cropping: by planting rows of unrelated or genetically distinct crops as barriers between herbivores and their preferred host plant, the farmer effectively reduces the ability of the herbivore to spread throughout the entire plot.
In livestock
The genetic diversity of livestock species permits animal husbandry in a range of environments and with a range of different objectives. It provides the raw material for selective breeding programmes and allows livestock populations to adapt as environmental conditions change.
Livestock biodiversity can be lost as a result of breed extinctions and other forms of genetic erosion. As of June 2014, among the 8,774 breeds recorded in the Domestic Animal Diversity Information System (DAD-IS), operated by the Food and Agriculture Organization of the United Nations (FAO), 17 percent were classified as being at risk of extinction and 7 percent as already extinct. There is now a Global Plan of Action for Animal Genetic Resources, developed under the auspices of the Commission on Genetic Resources for Food and Agriculture in 2007, which provides a framework and guidelines for the management of animal genetic resources.
Awareness of the importance of maintaining animal genetic resources has increased over time. FAO has published two reports on the state of the world's animal genetic resources for food and agriculture, which provide detailed analyses of global livestock diversity and of our ability to manage and conserve it.
Viral implications
High genetic diversity in viruses must be considered when designing vaccinations. High genetic diversity makes it difficult to design targeted vaccines and allows viruses to evolve quickly to resist them. For example, malaria vaccinations are impacted by high levels of genetic diversity in the protein antigens. In addition, HIV-1 genetic diversity limits the use of currently available viral load and resistance tests.
Coronavirus populations have considerable evolutionary diversity due to mutation and homologous recombination. For example, the sequencing of 86 SARS-CoV-2 coronavirus samples obtained from infected patients revealed 93 mutations, indicating the presence of considerable genetic diversity. Replication of the coronavirus RNA genome is catalyzed by an RNA-dependent RNA polymerase. During replication this polymerase may undergo template switching, a form of homologous recombination. This process, which also generates genetic diversity, appears to be an adaptation for coping with RNA genome damage.
Coping with low genetic diversity
Natural
The natural world has several ways of preserving or increasing genetic diversity. Among oceanic plankton, viruses aid in the genetic shifting process. Ocean viruses, which infect the plankton, carry genes of other organisms in addition to their own. When a virus containing the genes of one cell infects another, the genetic makeup of the latter changes. This constant shift of genetic makeup helps to maintain a healthy population of plankton despite complex and unpredictable environmental changes.
Cheetahs are a threatened species. Low genetic diversity and resulting poor sperm quality has made breeding and survivorship difficult for cheetahs. Moreover, only about 5% of cheetahs survive to adulthood. However, it has been recently discovered that female cheetahs can mate with more than one male per litter of cubs. They undergo induced ovulation, which means that a new egg is produced every time a female mates. By mating with multiple males, the mother increases the genetic diversity within a single litter of cubs.
Human intervention
Attempts to increase the viability of a species by increasing genetic diversity are called genetic rescue. For example, eight panthers from Texas were introduced to the Florida panther population, which was declining and suffering from inbreeding depression. Genetic variation was thus increased, resulting in a significant increase in population growth of the Florida panther. Creating or maintaining high genetic diversity is an important consideration in species rescue efforts, in order to ensure the longevity of a population.
Measures
Genetic diversity of a population can be assessed by some simple measures; a small computational sketch of two of them follows the list below.
Gene diversity is the proportion of polymorphic loci across the genome.
Heterozygosity is the fraction of individuals in a population that are heterozygous for a particular locus.
Alleles per locus is also used to demonstrate variability.
Nucleotide diversity is the extent of nucleotide polymorphisms within a population, and is commonly measured through molecular markers such as micro- and minisatellite sequences, mitochondrial DNA, and single-nucleotide polymorphisms (SNPs).
Furthermore, stochastic simulation software is commonly used to predict the future of a population given measures such as allele frequency and population size.
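As referenced above, two of these measures are straightforward to compute directly from raw data. The Python sketch below computes observed heterozygosity at a single locus and nucleotide diversity over a set of aligned sequences; the genotypes and sequences are made-up toy data.

```python
# A small sketch computing two of the measures above from toy data:
# observed heterozygosity at one locus, and nucleotide diversity (pi) as
# the average pairwise proportion of differing sites.
from itertools import combinations

def observed_heterozygosity(genotypes):
    """Fraction of individuals heterozygous at a locus, e.g. ('A', 'a')."""
    return sum(a != b for a, b in genotypes) / len(genotypes)

def nucleotide_diversity(seqs):
    """Average proportion of differing sites over all pairs of sequences."""
    pairs = list(combinations(seqs, 2))
    per_pair = [sum(x != y for x, y in zip(s, t)) / len(s) for s, t in pairs]
    return sum(per_pair) / len(pairs)

genotypes = [("A", "A"), ("A", "a"), ("a", "a"), ("A", "a")]
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "ACGTACGT"]
print("H_obs =", observed_heterozygosity(genotypes))   # 0.5
print("pi    =", round(nucleotide_diversity(seqs), 4)) # 0.125
```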
Genetic diversity can also be measured. The various recorded ways of measuring genetic diversity include:
Species richness, a measure of the number of species
Species abundance, a relative measure of the abundance of species
Species density, an evaluation of the total number of species per unit area
| Biology and health sciences | Basics_4 | Biology |
404001 | https://en.wikipedia.org/wiki/Algebraic%20equation | Algebraic equation | In mathematics, an algebraic equation or polynomial equation is an equation of the form , where P is a polynomial with coefficients in some field, often the field of the rational numbers.
For example, x^5 - 3x + 1 = 0 is an algebraic equation with integer coefficients, and
y^4 + xy/2 = x^3/3 - xy^2 + y^2 - 1/7
is a multivariate polynomial equation over the rationals.
For many authors, the term algebraic equation refers only to the univariate case, that is polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables (the multivariate case), in which case the term polynomial equation is usually preferred.
Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations involving only those same types of coefficients (that is, that can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
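As a concrete illustration of such numeric approximation, the Python sketch below (assuming NumPy is available) approximates all five complex roots of the degree-5 example x^5 - 3x + 1 = 0, an equation with no general solution in radicals.

```python
# Numeric approximation of the roots of x^5 - 3x + 1 = 0. numpy.roots
# builds the companion matrix of the polynomial and returns floating-point
# approximations of all five complex roots.
import numpy as np

coeffs = [1, 0, 0, 0, -3, 1]              # x^5 + 0x^4 + 0x^3 + 0x^2 - 3x + 1
for root in sorted(np.roots(coeffs), key=lambda z: (z.real, z.imag)):
    z = complex(root)                     # plain Python complex for printing
    residual = abs(np.polyval(coeffs, z)) # |p(z)|, should be ~0
    print(f"root = {z:.6f}   |p(root)| = {residual:.1e}")
```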
Terminology
The term "algebraic equation" dates from the time when the main problem of algebra was to solve univariate polynomial equations. This problem was completely solved during the 19th century; see Fundamental theorem of algebra, Abel–Ruffini theorem and Galois theory.
Since then, the scope of algebra has been dramatically enlarged. In particular, it includes the study of equations that involve nth roots and, more generally, algebraic expressions. This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, especially when considering multivariate equations.
History
The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC, could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets).
Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like x = (1 + √5)/2 for the positive solution of x² − x − 1 = 0. The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. Finally Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of at least degree 5 do not even have any solution in radicals, and gave criteria for deciding if an equation is in fact solvable using radicals.
Areas of study
The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations.
Two equations are equivalent if they have the same set of solutions. In particular, the equation P = Q is equivalent to P − Q = 0. It follows that the study of algebraic equations is equivalent to the study of polynomials.
A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms in the first member, the previously mentioned polynomial equation becomes
42y⁴ + 21xy − 14x³ + 42xy² − 42y² + 6 = 0.
Because sine, exponentiation, and 1/T are not polynomial functions,
is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T.
Theory
Polynomials
Given an equation in the unknown x
(E): a_n x^n + a_(n−1) x^(n−1) + ... + a_1 x + a_0 = 0,
with coefficients in a field K, one can equivalently say that the solutions of (E) in K are the roots in K of the polynomial
P(X) = a_n X^n + a_(n−1) X^(n−1) + ... + a_1 X + a_0.
It can be shown that a polynomial of degree n in a field has at most n roots. The equation (E) therefore has at most n solutions.
If K' is a field extension of K, one may consider (E) to be an equation with coefficients in K', and the solutions of (E) in K are also solutions in K' (the converse does not hold in general). It is always possible to find a field extension of K, known as the rupture field of the polynomial P, in which (E) has at least one solution.
Existence of solutions to real and complex equations
The fundamental theorem of algebra states that the field of the complex numbers is algebraically closed, that is, all polynomial equations with complex coefficients and degree at least one have a solution.
It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as x² + 1 = 0 does not have a solution in the field of real numbers (the solutions are the imaginary units i and −i).
While the real solutions of real equations are intuitive (they are the x-coordinates of the points where the curve y = P(x) intersects the x-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize.
However, a monic polynomial of odd degree must necessarily have a real root. The associated polynomial function in x is continuous, and it approaches −∞ as x approaches −∞ and +∞ as x approaches +∞. By the intermediate value theorem, it must therefore assume the value zero at some real x, which is then a solution of the polynomial equation.
Connection to Galois theory
There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals.
Explicit solution of numerical equations
Approach
The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree n reduces to factoring the associated polynomial, that is, rewriting (E) in the form
a_n (x − z_1) ... (x − z_n) = 0,
where the solutions are then the z_1, ..., z_n. The problem is then to express the z_i in terms of the coefficients a_i.
This approach applies more generally if the coefficients and solutions belong to an integral domain.
General techniques
Factoring
If an equation P(x) = 0 of degree n has a rational root α, the associated polynomial can be factored to give the form P(x) = (x − α)Q(x) (by dividing P(x) by x − α, or by writing P(x) − P(α) as a linear combination of terms of the form x^k − α^k and factoring out x − α). Solving P(x) = 0 thus reduces to solving the degree n − 1 equation Q(x) = 0. See, for example, the case n = 3.
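A worked instance of this reduction (example chosen here for illustration): x = 1 is a rational root of P(x) = x³ − 2x² − x + 2, and dividing by x − 1 leaves a quadratic whose roots complete the solution:

```latex
P(x) = x^3 - 2x^2 - x + 2
     = (x - 1)\,(x^2 - x - 2)
     = (x - 1)(x - 2)(x + 1)
% The roots of P are therefore 1, 2, and -1.
```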
Elimination of the sub-dominant term
To solve an equation of degree n,
(E): a_n x^n + a_(n−1) x^(n−1) + ... + a_0 = 0,
a common preliminary step is to eliminate the degree-(n − 1) term: by setting x = y − a_(n−1)/(n a_n), equation (E) becomes an equation of the same degree in y with no term of degree n − 1.
Leonhard Euler developed this technique for the cubic case, but it is also applicable to the quartic, for example.
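For the cubic, this step can be made explicit (a standard computation, shown here as a sketch):

```latex
% Starting from x^3 + a x^2 + b x + c = 0, substitute x = y - a/3:
y^3 + p y + q = 0,
\qquad p = b - \frac{a^2}{3},
\qquad q = c - \frac{ab}{3} + \frac{2a^3}{27}
% The resulting "depressed" cubic has no quadratic term.
```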
Quadratic equations
To solve a quadratic equation of the form ax² + bx + c = 0 (with a ≠ 0), one calculates the discriminant Δ defined by Δ = b² − 4ac. The roots themselves are given by the quadratic formula shown after the following list.
If the polynomial has real coefficients, it has:
two distinct real roots if Δ > 0;
one real double root if Δ = 0;
no real root if Δ < 0, but two complex conjugate roots.
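The quadratic formula, stated here for completeness:

```latex
x = \frac{-b \pm \sqrt{\Delta}}{2a}, \qquad \Delta = b^2 - 4ac
```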
Cubic equations
The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula.
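For the depressed cubic y³ + py + q = 0 obtained by eliminating the quadratic term, Cardano's formula can be written as follows (standard form; sign and branch conventions vary by source, and the expression yields a real root when q²/4 + p³/27 ≥ 0):

```latex
y = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
  + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}
```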
Quartic equations
For detailed discussions of some solution methods see:
Tschirnhaus transformation (general method, not guaranteed to succeed);
Bezout method (general method, not guaranteed to succeed);
Ferrari method (solutions for degree 4);
Euler method (solutions for degree 4);
Lagrange method (solutions for degree 4);
Descartes method (solutions for degree 2 or 4);
A quartic equation ax⁴ + bx³ + cx² + dx + e = 0 with a ≠ 0 may be reduced to a quadratic equation by a change of variable provided it is either biquadratic (b = d = 0) or quasi-palindromic (e = am², d = bm).
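The biquadratic case is the simplest; a brief sketch of the substitution:

```latex
a x^4 + c x^2 + e = 0
\;\xrightarrow{\;z = x^2\;}\;
a z^2 + c z + e = 0,
\qquad x = \pm\sqrt{z}
% Each root z of the quadratic yields up to two roots x of the quartic.
```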
Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions.
Higher-degree equations
Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17.
Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions.
Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method.
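As an illustration of such a numerical approach, here is a minimal Newton's method sketch in Python (illustrative only; real root-finders add safeguards for derivatives near zero, non-convergence, and complex roots):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Approximate a root of f near x0 using Newton's method:
    repeatedly replace x with x - f(x)/df(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: a real root of x^5 - 3x + 1 = 0, a quintic which in
# general has no solution in radicals
root = newton(lambda x: x**5 - 3*x + 1, lambda x: 5*x**4 - 3, 1.3)
print(root, root**5 - 3*root + 1)  # residual should be ~0
```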
| Mathematics | Elementary algebra | null |
404106 | https://en.wikipedia.org/wiki/Antarctic%20toothfish | Antarctic toothfish | The Antarctic toothfish (Dissostichus mawsoni), also known as the Antarctic cod, is a large, black or brown fish found in very cold (subzero) waters of the Southern Ocean near Antarctica. It is the largest fish in the Southern Ocean, feeding on shrimp and smaller fish, and preyed on by whales, orcas, and seals. It is caught for food and marketed as Chilean sea bass together with its sister species, the more northerly Patagonian toothfish (D. eleginoides). Often mistakenly called "Antarctic cod", the Antarctic toothfish belongs to the notothen family (Nototheniidae), a family of fish genera that are abundant near Antarctica.
Name and taxonomy
The common name "toothfish" refers to the two rows of teeth in the upper jaw, thought to give it a shark-like appearance.
The genus name Dissostichus is from the Greek (twofold) and stichus (line) and refers to the presence of two long lateral lines that enable the fish to sense prey. The species name, mawsoni, honors the Australian geologist Douglas Mawson who led the 1911–1914 Australasian Antarctic Expedition that explored the Antarctic coast and obtained the species' type specimen.
The Antarctic toothfish was first formally described in 1937 by the English ichthyologist John Roxborough Norman, with the type locality given as off MacRobertson Land at 66°45'S, 62°03'E in Antarctica.
Description
Fully grown, these fish (and their warmer-water relative, the Patagonian toothfish, D. eleginoides) can grow to more than in length and 135 kg in weight, twice as large as the next-largest Antarctic fish. Being large, and consistent with the unstructured food webs of the ocean (i.e., big fish eat little fish regardless of identity, even eating their own offspring), the Antarctic toothfish has been characterized as a voracious predator. Furthermore, by being by far the largest midwater fish in the Southern Ocean, it is thought to fill the ecological role that sharks play in other oceans. Aiding in that role, the Antarctic toothfish is one of only five notothenioid species that, as adults, are neutrally buoyant. This buoyancy is attained at 100–120 cm in length and enables them to spend time above the bottom without expending extra energy. Both bottom-dwelling and mid-water prey are, therefore, available to them. Most other notothenioid fish and the majority of all Antarctic fishes, including smaller toothfish, are confined to the bottom. Coloring is black to olive brown, sometimes lighter on the undersides, with a mottled pattern on body and fins. Small fish blend in very well among the benthic sponges and corals. The species has a broad head, an elongated body, long dorsal and anal fins, large pectoral fins, and a rudder-like caudal fin. They typically move slowly, but are capable of speed bursts that can elude predatory seals.
Feeding ecology
Over the continental shelf, Antarctic toothfish feed on shrimp (Nauticaris spp.) and small fish, principally another neutrally buoyant nototheniid, the Antarctic silverfish (Pleuragramma antarcticum). This loosely schooling species is also a major prey of Adélie (Pygoscelis adeliae) and emperor penguins (Aptenodytes forsteri), Weddell seals (Leptonychotes weddellii) and Antarctic minke whales (Balaenoptera bonaerensis). Therefore, competition for prey among toothfish and these other mesopredators (middle trophic level predators) could be very important. The large Antarctic toothfish are eaten by sperm whales (Physeter macrocephalus), killer whales (Orcinus orca), Weddell seals, and possibly colossal squid (Mesonychoteuthis hamiltoni). Toothfish that are dwelling on the bottom, particularly those caught during the summer on the continental slope, eat mainly grenadiers (Macrouridae), but also feed on other smaller fish species and skates (Raja spp.). They also feed on the colossal squid. Antarctic toothfish have been caught to depths of 2200 m, though based on commercial fishing effort, few occur that deep.
Aging and reproduction
Aging data indicate Antarctic toothfish are relatively fast-growing when young, but growth slows later in life. They reach about one-third of maximum size after 5 years, and half of maximum by 10 years, after which growth slows considerably. Growing fast when small is an adaptation of most predatory fish, e.g., sharks, so as not to be small for very long. The maximum age recorded so far has been 48 years. Antarctic toothfish take a long time to mature (13 years for males, 17 years for females) and once mature may not spawn every year, though the actual spawning interval is unknown. Only a few Antarctic toothfish with mature eggs have ever been caught, so knowledge of fecundity is sparse. They spawn sometime during winter. Large, mature, older fish have been caught among the seamounts of the Pacific-Antarctic Ridge, a location thus thought to be important for spawning. Smaller, subadult Antarctic toothfish tend to concentrate in shallower waters on the continental shelf, while a large portion of the older fish are found on the continental slope. This sequestering by size and age could be another adaptation for small fish to avoid being eaten by large ones. The recruitment potential of Antarctic toothfish, a measure of both fecundity and survival to spawning age, is not known.
Anatomy and physiology
The Antarctic toothfish has a lightweight, partially cartilaginous skeleton, lacks a swim bladder, and has fatty deposits which act as a stored energy source, particularly during spawning. This fat also makes large toothfish neutrally buoyant. Many toothfish caught over the seamounts are very depleted of fat, and this is thought perhaps to be related to spawning and spawning migration, which are energy-demanding activities. It is not known what happens to these fat-depleted fish, including whether they reach, or how long it takes them to reach, breeding condition again; this ostensibly occurs upon returning to continental-slope waters. Antarctic toothfish have vision and lateral line systems well adapted to finding prey in low light levels. Since ice covers the surface of the ocean where Antarctic toothfish occur even in summer, these sensory specializations likely evolved to enable survival in the reduced light levels found under ice and in the Antarctic winter, as well as at great depths. Antarctic toothfish also have a very well developed sense of smell, which is why they are easily caught by baited hooks and also scavenge the remains of penguins killed by other predators.
Cold adaptation
The Antarctic toothfish lives in subzero waters below latitude 60°S. Like most other Antarctic notothenioids, it is noteworthy for producing antifreeze glycoproteins, a feature not seen in its closest relative, the Patagonian toothfish, which typically inhabits slightly warmer waters. The presence of antifreeze glycoproteins allows the Antarctic toothfish (and other notothenioids) to thrive in the subzero waters of the Southern Ocean surrounding Antarctica. The Antarctic toothfish's voracious appetite is also important in coping with cold water. It is mainly caught in the Ross Sea in the austral summer, but has also been recorded from Antarctic coastal waters south of the Indian Ocean sector, in the vicinity of the Antarctic Peninsula, and near the South Sandwich Islands.
Fishery and associated ecosystem
A fishery for Antarctic toothfish, managed by the Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR), has existed since 1997. The existence of this fishery in the Ross Sea, the area where most Antarctic toothfish are caught, is very contentious; the main argument against it is the lack of accurate population parameters, such as original stock size, fecundity, and recruitment. Moreover, the main fishing grounds are presumed by some researchers to cover the area through which the entire stock of Antarctic toothfish passes. Typically, the fishing season has finished in the area by the end of February, and for the remainder of the year much of the area is covered by sea ice, providing a natural impediment to fishing. This fishery is characterised by opponents as a challenge to manage owing to the nature of benthic longline fishing. The bycatch of other fish can also be significant: the ratio of bycatch to toothfish caught ranged from 4.5% to 17.9%, averaging 9.3%, from the 1999/2000 fishing season (when the toothfish catch first exceeded 50 tonnes) to 2013/14 in CCAMLR Subarea 88.1, and from 2.3% to 24.5%, averaging 12.4%, in CCAMLR Subarea 88.2 up to the latest publicly available figure from 2013/14. The bycatch of other fish species is also regulated to a maximum amount annually by CCAMLR. CCAMLR decision rules are based on determining the catch level that will ensure that the median estimated spawning stock biomass (not total biomass) is greater than or equal to 50% of the average pre-exploitation spawning biomass after a further 35 years of fishing (i.e. 35 years from each year of assessment), with the additional condition that the probability is less than 10% that the spawning biomass will decline below 20% of the pre-exploitation level at any time during this period. Current spawning stock biomass for Antarctic toothfish in the Ross Sea Region is estimated to be at 75% of the pre-exploitation level (95% Bayesian probability interval 71–78%), well above the 50% target reference point.
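The decision rule just described can be illustrated with a toy stochastic projection. The following Python sketch is purely illustrative: the population dynamics, parameter values, and function names are invented, and real CCAMLR assessments use far more detailed stock models:

```python
import random

def decision_rule_ok(b0, catch_frac, years=35, n_sims=1000,
                     growth=0.05, sd=0.1):
    """Toy check of the CCAMLR-style rule: median final spawning
    biomass >= 50% of B0, and P(ever dipping below 20% of B0) < 10%."""
    finals, n_dipped = [], 0
    for _ in range(n_sims):
        b, dipped = b0, False
        for _ in range(years):
            b += growth * b * (1 - b / b0)      # logistic-style regrowth
            b -= catch_frac * b                  # remove the catch
            b *= random.lognormvariate(0, sd)    # recruitment noise
            if b < 0.2 * b0:
                dipped = True
        finals.append(b)
        n_dipped += dipped
    finals.sort()
    median_status = finals[n_sims // 2] / b0
    return median_status >= 0.5 and n_dipped / n_sims < 0.1

print(decision_rule_ok(b0=100.0, catch_frac=0.02))
```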
An independent study was reported to have detected the disappearance of large fish at the southern periphery of the species' range in McMurdo Sound, a decline postulated to be an effect of the fishery. However, more recent work has shown this was not the case in 2014. Some studies have reported that the prevalence of fish-eating killer whales has apparently been decreasing in the southern Ross Sea, that the foraging efficiency of Weddell seals is decreasing, and that numbers of Adélie penguins (competitors for Antarctic silverfish) have been increasing. More recent studies have confirmed visual sightings of Weddell seals and Type-C killer whales holding and consuming large toothfish in the McMurdo Sound area, and raise questions over the previously assumed dominance of Antarctic silverfish (Pleuragramma antarcticum) in the diets of Weddell seals and Type-C killer whales. These reports highlight the importance of managing this fishery in the best interests of the ecosystem by continuing to collect information on both Antarctic toothfish life history and the interaction of that species with predators and prey. An important research programme in this regard is the 'Shelf' survey, carried out annually since 2012, which monitors the abundance of subadult Antarctic toothfish in areas where subadult-sized fish have been regularly found (e.g., in the southern Ross Sea). It is designed to provide data to better estimate recruitment variability and to provide an important early-warning signal of changes in toothfish recruitment. The project is also used for additional targeted data collection to better understand the lifecycle and ecosystem role of Antarctic toothfish.
Research has provided evidence for long-distance migrations of type-C killer whales between the Ross Sea and New Zealand waters, indicating a much wider range than had been postulated by a number of scientists. One adult female type-C killer whale has been seen in both New Zealand waters and McMurdo Sound, Antarctica, and a large proportion of type-C killer whales sighted in McMurdo Sound have scars caused by cookiecutter sharks, which are currently assumed to be limited to north of 50°S. At the same time as this study was occurring, Italian whale experts at Terra Nova Bay, about 360 km north of Scott Base, deployed satellite transmitters on type-C killer whales to determine the whales' movements. Their results independently verified that type-C killer whales were commuting between Scott Base and the waters off Northland.
The total catch of Antarctic toothfish in 2013–14 was 3,820 tonnes; 3,320 tonnes of this were taken from the Ross Sea (FAO Statistical Divisions 88.1 and 88.2), with the remainder taken from other high seas areas within the CCAMLR convention area.
Management
The ecosystem approach to fishing is encapsulated in Article II of the CAMLR Convention. The ecosystem approach uses decision rules based on both population status targets and limit reference points, and incorporates uncertainty and ecosystem status in the calculation of these targets. Different reference points to account for the needs of dependent predators in the ecosystem are used depending on the location of the species in the food web. The ecosystem fisheries management approach by CCAMLR involves use of move-on rules to protect trophic interactions, and limit direct effects of fishing on fish bycatch, seabirds, and vulnerable marine ecosystems. Annually reviewed mitigation measures such as line weighting and streamer lines minimize seabird bycatch, which have resulted in a substantial reduction in accidental seabird mortalities in the CAMLR Convention Area. The 50% (target) and 20% (limit) reference points used by the CCAMLR decision rules exceed the requirements for target and limit reference points set by almost all national and international fisheries management organizations, even for species longer lived than toothfish. A wide study of many fisheries generally indicated that most reach maximum sustainable yield at 30–35% of their pre-exploitation abundances. CCAMLR uses a more conservative reference level to allow exploitation at a level where toothfish recruitment and the ecosystem in general is not appreciably impacted. This is required by Article II of the CAMLR Convention. A common misunderstanding of the CCAMLR decision rules is an assumption that the decline in population size will follow a clear trajectory from the starting year to a point 35 years later when the stock size will reach 50% of pre-exploitation levels and an assumption that no feedback occurs during each assessment. The catch limit, though, is recalculated based on all updated or revised data at each annual or biennial assessment. This approach is used to ensure that the 50% level will be approached slowly and enables an ongoing readjustment of catch levels as knowledge improves.
Environment and bycatch
CCAMLR imposes stringent environmental protection and bycatch mitigation measures to Antarctic toothfish fisheries, including:
Monitoring of daytime setting, and removal of vessels from the fishery should any vessel catch more than three seabirds
Use of streamer lines during setting to keep birds away from baited hooks
Weighting of lines to ensure fast sink rates to prevent seabirds from accessing baited hooks
The use of bird exclusion devices to prevent birds from accessing hooks whilst lines are being hauled
Limitations on the release of fish offal overboard at the same time as setting and hauling of lines to avoid attracting seabirds: An additional requirement prohibits the dumping of all offal south of 60°S, the region where Antarctic toothfish are caught
Prohibition on the dumping of oil, plastic, garbage, food waste, poultry, eggs or eggshells, sewage, and ash by fishing vessels
Prohibition of the use of plastic packaging bands on fishing vessels
Incidental mortality of seabirds as a result of fishing has fallen to near-zero levels in the CCAMLR convention area. No mortality of seabirds or marine mammals was recorded as a result of fishing for Antarctic toothfish in 2011–12 and only two seabirds (southern giant petrels Macronectes giganteus) have been killed as a result of fishing in the Ross Sea since 1996/97.
Compliance
Compliance measures adopted by CCAMLR apply to all Antarctic toothfish fisheries. These include:
At-sea inspections of fishing vessels
Vessel licensing
Port inspections of fishing vessels
Continuous reporting of fishing vessel positions via satellite-linked vessel monitoring systems
Catch documentation scheme for toothfish, which tracks toothfish from the point of landing through to the final point of sale and requires verification and authorisation by government authorities at each step
The requirement to carry two scientific observers on each licensed vessel – including one from a member state other than the vessel flag
Sustainability
In November 2010, the Marine Stewardship Council (MSC) certified the Ross Sea Antarctic toothfish fishery as a sustainable and well-managed fishery. The certification is contentious, with many conservation groups protesting it due to the paucity of information needed to reliably manage the fishery, and because only eight of the 19 vessels in the fishery during the latest year for which data are publicly available were certified. During the 2013–14 season, vessels operating under the Marine Stewardship Certification landed 51.3% of all Antarctic toothfish from the Ross Sea Region (CCAMLR Subarea 88.1) and 64.7% of Antarctic toothfish from the Amundsen Sea sector (CCAMLR Subarea 88.2).
The fact that only a portion of the Antarctic toothfish catch is certified, the high price the fish commands, and the remote areas where a large proportion of the fish are caught have all been advanced as encouragements to illegal, unreported, and unregulated (IUU) fishing and to mislabeling. A 2011 genetic study of MSC-labeled Antarctic toothfish found in markets revealed that a significant proportion was not from the MSC-certified stock, and that many were not toothfish at all. The MSC had conducted its own internal study, which found no evidence of mislabeling. The MSC conducts an annual audit of the fishery, which includes sampling of certified product.
Due to the challenges that faced toothfish management in the 1990s and early 2000s (e.g., IUU fishing, mislabeling, and inadequate data for management), consumer seafood guides such as Seafood Watch placed toothfish of both species (Chilean seabass) on their red, or "avoid", list. However, following a comprehensive review in 2012 and in light of up-to-date, internationally peer-reviewed scientific information, Seafood Watch (a programme of the Monterey Bay Aquarium) upgraded the Ross Sea Antarctic toothfish fishery to a "good alternative" in April 2013.
Greenpeace International added the Antarctic toothfish to its seafood red list in 2010. This approach is at variance with the high score given the fishery when it was granted certification by the MSC.
| Biology and health sciences | Acanthomorpha | Animals |
404130 | https://en.wikipedia.org/wiki/Piecewise%20function | Piecewise function | In mathematics, a piecewise function (also called a piecewise-defined function, a hybrid function, or a function defined by cases) is a function whose domain is partitioned into several intervals ("subdomains") on which the function may be defined differently. Piecewise definition is actually a way of specifying the function, rather than a characteristic of the resulting function itself.
Terms like piecewise linear, piecewise smooth, piecewise continuous, and others are very common. The meaning of a function being piecewise P, for a property P, is roughly that the domain of the function can be partitioned into pieces on which the property P holds, but is used slightly differently by different authors. Sometimes the term is used in a more global sense involving triangulations; see Piecewise linear manifold.
Notation and interpretation
Piecewise functions can be defined using the common functional notation, where the body of the function is an array of sub-functions and associated subdomains. A semicolon or comma may follow the sub-function or subdomain columns. The "if" or "for" at the start of the right column is rarely omitted.
The subdomains together must cover the whole domain; often it is also required that they are pairwise disjoint, i.e. form a partition of the domain. In order for the overall function to be called "piecewise", the subdomains are usually required to be intervals (some may be degenerate intervals, i.e. single points or unbounded intervals). For bounded intervals, the number of subdomains is required to be finite; for unbounded intervals it is often only required to be locally finite. For example, consider the piecewise definition of the absolute value function:
|x| = −x, if x < 0; |x| = x, if x ≥ 0.
For all values of less than zero, the first sub-function () is used, which negates the sign of the input value, making negative numbers positive. For all values of greater than or equal to zero, the second sub-function is used, which evaluates trivially to the input value itself.
The following table documents the absolute value function at certain sample values of x:

 x    | |x|
 −3   | 3
 −0.1 | 0.1
 0    | 0
 1/2  | 1/2
 5    | 5
In order to evaluate a piecewise-defined function at a given input value, the appropriate subdomain needs to be chosen in order to select the correct sub-function—and produce the correct output value.
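This selection process is straightforward to express in code. The following Python sketch is illustrative (the helper and its names are invented, not part of any standard library); it evaluates a piecewise definition by scanning (condition, sub-function) pairs in order:

```python
def make_piecewise(pieces):
    """pieces: list of (condition, subfunction) pairs, where condition
    is a predicate on x selecting the subdomain."""
    def f(x):
        for condition, subfn in pieces:
            if condition(x):     # choose the first matching subdomain
                return subfn(x)
        raise ValueError("x is outside the domain")
    return f

# The absolute value function as a piecewise definition
absolute = make_piecewise([
    (lambda x: x < 0,  lambda x: -x),
    (lambda x: x >= 0, lambda x: x),
])
print(absolute(-3), absolute(0), absolute(5))  # 3 0 5
```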
Examples
A step function or piecewise constant function, composed of constant sub-functions
Piecewise linear function, composed of linear sub-functions
Broken power law, a function composed of power-law sub-functions
Spline, a function composed of polynomial sub-functions, often constrained to be smooth at the joints between pieces
B-spline
PDIFF
the bump function f(x) = exp(−1/(1 − x²)) for |x| < 1 and f(x) = 0 otherwise, and some other common bump functions. These are infinitely differentiable, but analyticity holds only piecewise.
Continuity and differentiability of piecewise-defined functions
A piecewise-defined function is continuous on a given interval in its domain if the following conditions are met:
its sub-functions are continuous on the corresponding intervals (subdomains),
there is no discontinuity at an endpoint of any subdomain within that interval.
The pictured function, for example, is piecewise-continuous throughout its subdomains, but is not continuous on the entire domain, as it contains a jump discontinuity at one of the subdomain endpoints. The filled circle indicates that the value of the right sub-function is used in this position.
For a piecewise-defined function to be differentiable on a given interval in its domain, the following conditions have to be fulfilled in addition to those for continuity above (a worked example follows the list):
its sub-functions are differentiable on the corresponding open intervals,
the one-sided derivatives exist at all intervals' endpoints,
at the points where two subintervals touch, the corresponding one-sided derivatives of the two neighboring subintervals coincide.
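As a worked example (chosen here for illustration, not taken from the original text), consider a function built from a parabola and a line:

```latex
f(x) =
\begin{cases}
  x^2    & x \le 1 \\
  2x - 1 & x > 1
\end{cases}
% Continuity at the joint x = 1: 1^2 = 2(1) - 1 = 1, so the one-sided
% limits agree.
% One-sided derivatives at x = 1: (x^2)' = 2x gives 2 from the left,
% and (2x - 1)' = 2 from the right. They coincide, so f is
% differentiable on all of R.
```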
Some sources only examine the function definition, while others acknowledge the property iff the function admits a partition into a piecewise definition that meets the conditions.
Applications
In applied mathematical analysis, "piecewise-regular" functions have been found to be consistent with many models of the human visual system, where images are perceived at a first stage as consisting of smooth regions separated by edges (as in a cartoon);
a cartoon-like function is a C2 function, smooth except for the existence of discontinuity curves.
In particular, shearlets have been used as a representation system to provide sparse approximations of this model class in 2D and 3D.
Piecewise defined functions are also commonly used for interpolation, such as in nearest-neighbor interpolation.
| Mathematics | Functions: General | null |
404170 | https://en.wikipedia.org/wiki/Koku | Koku | The koku (石) is a Chinese-based Japanese unit of volume. 1 koku is equivalent to 10 to or approximately 180 litres, or about 150 kg of rice. It converts, in turn, to 100 shō and 1000 gō. One gō is the traditional volume of a single serving of rice (before cooking), used to this day for the plastic measuring cup that is supplied with commercial Japanese rice cookers.
The koku in Japan was typically used as a dry measure. The amount of rice production measured in koku was the metric by which the magnitude of a feudal domain (han) was evaluated. A feudal lord was only considered daimyō class when his domain amounted to at least 10,000 koku. As a rule of thumb, one koku was considered a sufficient quantity of rice to feed one person for one year.
The Chinese equivalent or cognate unit for capacity is the shi or dan (石), also known as hu (斛), now approximately 103 litres but historically about 59.44 litres.
Chinese equivalent
The Chinese dan is equal to 10 dou (斗) "pecks", or 100 sheng (升) "pints". While the current dan is 103 litres in volume, the dan of the Tang dynasty (618–907) period equalled 59.44 litres.
Modern unit
The exact modern koku is calculated to be 180.39 litres, 100 times the capacity of a modern shō. This modern koku is essentially defined to be the same as the koku from the Edo period (1600–1868), namely 100 times the shō equal to 64,827 cubic bu in the traditional measuring system.
Origin of the modern unit
The kyōmasu, the semi-official shō measuring box since the late 16th century under the daimyō Nobunaga, began to be made in a different (larger) size in the early Edo period, sometime during the 1620s. Its dimensions, given in the traditional Japanese length unit system, were 4 sun 9 bu square by 2 sun 7 bu in depth. Its volume could be calculated by multiplication:
1 koku = 100 shō = 100 × (49 bu × 49 bu × 27 bu) = 100 × 64,827 cubic bu
Although this was referred to as the "new" measuring cup in its early days, its use supplanted the old measure in most areas in Japan, until the only place still left using the old cup was the city of Edo, and the Edo government passed an edict declaring the new cup the official nationwide measure standard in 1669 (Kanbun 9).
Modern measurement enactment
When the 1891 Japanese Weights and Measures Act was promulgated, it defined the unit shō as the capacity of the standard masu of 64,827 cubic bu. The same act also defined the length of one shaku as 10/33 metre. The metric equivalent of the modern shō is 2401/1331 litres. The modern koku is therefore 240,100/1331 litres, or 180.39 litres.
The modern shaku defined here is set to equal the so-called "compromise shaku" (setchū-shaku), measuring 302.97 mm, a middle-ground value between two different standards. A researcher has pointed out that the masu cups ought to have used a shaku which was 0.2% longer. However, the actual measuring cups in use did not quite attain the metric, and when the Japanese Ministry of Finance collected actual samples of masu from the measuring-cup guilds of both eastern and western Japan, they found that the measurements were close to the average of the two standards.
Lumber koku
The "lumber " or "maritime " is defined as equal to 10 cubic in the lumber or shipping industry, compared with the standard measures 6.48 cubic . A lumber is conventionally accepted as equivalent to 120 board feet, but in practice may convert to less. In metric measures 1 lumber is about .
Historic use
The exact measure now in use was devised around the 1620s, but not officially adopted for all of Japan until the Kanbun era (1660s).
Feudal Japan
Under the Tokugawa shogunate (1603–1868) of the Edo period of Japanese history, each feudal domain had an assessment of its potential income known as kokudaka (production yield), which in part determined its order of precedence at the Shogunal court. The smallest kokudaka to qualify the fief-holder for the title of daimyō was 10,000 koku, and Kaga han, the largest fief (other than that of the shōgun), was called the "million-koku domain". Its holdings totaled around 1,025,000 koku. Many samurai, including hatamoto (a high-ranking class of samurai), received stipends in koku, while a few received salaries instead.
The kokudaka was reported in terms of brown rice (genmai) in most places, with the exception of the land ruled by the Satsuma clan, which reported in terms of unhusked or non-winnowed rice. Since this practice persisted, past Japanese rice production statistics need to be adjusted for comparison with other countries that report production by milled or polished rice.
Even in certain parts of the Tōhoku region or Ezo (Hokkaidō), where rice could not be grown, the economy was still measured in terms of koku, with other crops and produce converted to their equivalent value in terms of rice. The kokudaka was not adjusted from year to year, and thus some fiefs had larger economies than their nominal koku indicated, due to land reclamation and new rice field development, which allowed them to fund development projects.
As measure of cargo ship class
Koku was also used to measure how much a ship could carry when all its loads were rice. Smaller ships carried 50 koku while the biggest ships carried over 1,000 koku. The biggest ships were larger than military vessels owned by the shogunate.
In popular culture
The Hyakumangoku Matsuri (Million-Koku Festival) in Kanazawa, Japan celebrates the arrival of daimyō Maeda Toshiie into the city in 1583, although Maeda's income was not raised to over a million koku until after the Battle of Sekigahara in 1600.
In fiction
The James Clavell novel Shōgun uses the Koku measure extensively as a plot device by many of the main characters as a method of reward, punishment and enticement. While fiction, it shows the importance of the fief, the rice measure and payments.
| Physical sciences | Volume | Basics and measurement |
404412 | https://en.wikipedia.org/wiki/Bayesian%20statistics | Bayesian statistics | Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation, which views probability as the limit of the relative frequency of an event after many trials. More concretely, analysis in Bayesian methods codifies prior knowledge in the form of a prior distribution.
Bayesian statistical methods use Bayes' theorem to compute and update probabilities after obtaining new data. Bayes' theorem describes the conditional probability of an event based on data as well as prior information or beliefs about the event or conditions related to the event. For example, in Bayesian inference, Bayes' theorem can be used to estimate the parameters of a probability distribution or statistical model. Since Bayesian statistics treats probability as a degree of belief, Bayes' theorem can directly assign a probability distribution that quantifies the belief to the parameter or set of parameters.
Bayesian statistics is named after Thomas Bayes, who formulated a specific case of Bayes' theorem in a paper published in 1763. In several papers spanning from the late 18th to the early 19th centuries, Pierre-Simon Laplace developed the Bayesian interpretation of probability. Laplace used methods now considered Bayesian to solve a number of statistical problems. While many Bayesian methods were developed by later authors, the term "Bayesian" was not commonly used to describe these methods until the 1950s. Throughout much of the 20th century, Bayesian methods were viewed unfavorably by many statisticians due to philosophical and practical considerations. Many of these methods required much computation, and most widely used approaches during that time were based on the frequentist interpretation. However, with the advent of powerful computers and new algorithms like Markov chain Monte Carlo, Bayesian methods have gained increasing prominence in statistics in the 21st century.
Bayes's theorem
Bayes's theorem is used in Bayesian methods to update probabilities, which are degrees of belief, after obtaining new data. Given two events A and B, the conditional probability of A given that B is true is expressed as follows:

P(A|B) = P(B|A) P(A) / P(B),

where P(B) ≠ 0. Although Bayes's theorem is a fundamental result of probability theory, it has a specific interpretation in Bayesian statistics. In the above equation, A usually represents a proposition (such as the statement that a coin lands on heads fifty percent of the time) and B represents the evidence, or new data that is to be taken into account (such as the result of a series of coin flips). P(A) is the prior probability of A, which expresses one's beliefs about A before evidence is taken into account. The prior probability may also quantify prior knowledge or information about A. P(B|A) is the likelihood function, which can be interpreted as the probability of the evidence B given that A is true. The likelihood quantifies the extent to which the evidence B supports the proposition A. P(A|B) is the posterior probability, the probability of the proposition A after taking the evidence B into account. Essentially, Bayes's theorem updates one's prior beliefs about A after considering the new evidence B.
The probability of the evidence P(B) can be calculated using the law of total probability. If {A₁, A₂, ..., Aₙ} is a partition of the sample space, which is the set of all outcomes of an experiment, then,

P(B) = P(B|A₁)P(A₁) + P(B|A₂)P(A₂) + ... + P(B|Aₙ)P(Aₙ)
When there are an infinite number of outcomes, it is necessary to integrate over all outcomes to calculate P(B) using the law of total probability. Often, P(B) is difficult to calculate, as the calculation would involve sums or integrals that would be time-consuming to evaluate, so often only the product of the prior and likelihood is considered, since the evidence does not change in the same analysis. The posterior is proportional to this product:

P(A|B) ∝ P(B|A)P(A)
The maximum a posteriori, which is the mode of the posterior and is often computed in Bayesian statistics using mathematical optimization methods, remains the same. The posterior can be approximated even without computing the exact value of P(B), with methods such as Markov chain Monte Carlo or variational Bayesian methods.
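As a concrete illustration of Bayesian updating (a standard textbook example, not taken from this article), the following Python sketch updates a Beta prior on a coin's heads probability after observing flips; because the Beta distribution is conjugate to the Bernoulli likelihood, the posterior has a closed form and no integration is needed:

```python
def update_beta(alpha, beta, flips):
    """Posterior Beta parameters after observing coin flips.
    alpha, beta: prior pseudo-counts of heads and tails.
    flips: iterable of 1 (heads) and 0 (tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

# Uniform prior Beta(1, 1); observe 7 heads in 10 flips
a, b = update_beta(1, 1, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
posterior_mean = a / (a + b)   # (1 + 7) / (2 + 10) = 2/3
print(a, b, posterior_mean)
```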
Bayesian methods
The general set of statistical techniques can be divided into a number of activities, many of which have special Bayesian versions.
Bayesian inference
Bayesian inference refers to statistical inference where uncertainty in inferences is quantified using probability. In classical frequentist inference, model parameters and hypotheses are considered to be fixed. Probabilities are not assigned to parameters or hypotheses in frequentist inference. For example, it would not make sense in frequentist inference to directly assign a probability to an event that can only happen once, such as the result of the next flip of a fair coin. However, it would make sense to state that the proportion of heads approaches one-half as the number of coin flips increases.
Statistical models specify a set of statistical assumptions and processes that represent how the sample data are generated. Statistical models have a number of parameters that can be modified. For example, a coin can be represented as samples from a Bernoulli distribution, which models two possible outcomes. The Bernoulli distribution has a single parameter equal to the probability of one outcome, which in most cases is the probability of landing on heads. Devising a good model for the data is central in Bayesian inference. In most cases, models only approximate the true process, and may not take into account certain factors influencing the data. In Bayesian inference, probabilities can be assigned to model parameters. Parameters can be represented as random variables. Bayesian inference uses Bayes' theorem to update probabilities after more evidence is obtained or known.
Statistical modeling
The formulation of statistical models using Bayesian statistics has the identifying feature of requiring the specification of prior distributions for any unknown parameters. Indeed, parameters of prior distributions may themselves have prior distributions, leading to Bayesian hierarchical modeling, also known as multi-level modeling. A special case is Bayesian networks.
For conducting a Bayesian statistical analysis, best practices are discussed by van de Schoot et al.
For reporting the results of a Bayesian statistical analysis, Bayesian analysis reporting guidelines (BARG) are provided in an open-access article by John K. Kruschke.
Design of experiments
The Bayesian design of experiments includes a concept called 'influence of prior beliefs'. This approach uses sequential analysis techniques to include the outcome of earlier experiments in the design of the next experiment. This is achieved by updating 'beliefs' through the use of prior and posterior distribution. This allows the design of experiments to make good use of resources of all types. An example of this is the multi-armed bandit problem.
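One widely used Bayesian approach to the multi-armed bandit problem is Thompson sampling. The following Python sketch is illustrative only (the reward probabilities and function names are invented): each arm's success rate carries a Beta posterior, and the arm with the highest posterior sample is played at each round.

```python
import random

def thompson_sampling(true_probs, rounds=1000):
    """Choose arms by sampling from each arm's Beta posterior and
    playing the arm with the highest sampled value."""
    n_arms = len(true_probs)
    wins = [1] * n_arms    # Beta prior pseudo-counts (successes)
    losses = [1] * n_arms  # Beta prior pseudo-counts (failures)
    for _ in range(rounds):
        samples = [random.betavariate(wins[i], losses[i])
                   for i in range(n_arms)]
        arm = samples.index(max(samples))
        if random.random() < true_probs[arm]:   # simulated reward
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

wins, losses = thompson_sampling([0.3, 0.5, 0.7])
print(wins, losses)  # most pulls should concentrate on the 0.7 arm
```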
Exploratory analysis of Bayesian models
Exploratory analysis of Bayesian models is an adaptation or extension of the exploratory data analysis approach to the needs and peculiarities of Bayesian modeling. In the words of Persi Diaconis:
The inference process generates a posterior distribution, which has a central role in Bayesian statistics, together with other distributions like the posterior predictive distribution and the prior predictive distribution. The correct visualization, analysis, and interpretation of these distributions is key to properly answer the questions that motivate the inference process.
When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself:
Diagnoses of the quality of the inference; this is needed when using numerical methods such as Markov chain Monte Carlo techniques
Model criticism, including evaluations of both model assumptions and model predictions
Comparison of models, including model selection or model averaging
Preparation of the results for a particular audience
All these tasks are part of the Exploratory analysis of Bayesian models approach and successfully performing them is central to the iterative and interactive modeling process. These tasks require both numerical and visual summaries.
| Mathematics | Statistics | null |
404489 | https://en.wikipedia.org/wiki/Elephant%20seal | Elephant seal | Elephant seals or sea elephants are very large, oceangoing earless seals in the genus Mirounga. Both species, the northern elephant seal (M. angustirostris) and the southern elephant seal (M. leonina), were hunted to the brink of extinction for lamp oil by the end of the 19th century, but their numbers have since recovered. They can weigh up to . Despite their name, elephant seals are not closely related to elephants, and the large proboscis or trunk that males have was convergently evolved.
The northern elephant seal, somewhat smaller than its southern relative, ranges over the Pacific coast of the U.S., Canada and Mexico. The most northerly breeding location on the Pacific Coast is at Race Rocks Marine Protected Area, at the southern tip of Vancouver Island in the Strait of Juan de Fuca. The southern elephant seal is found in the Southern Hemisphere on islands such as South Georgia and Macquarie Island, on the coasts of New Zealand, Tasmania, and South Africa, and in Argentina on Península Valdés. In southern Chile, there is a small colony of 120 animals at Jackson Bay (Bahía Jackson) in Admiralty Sound (Seno Almirantazgo) on the southern coast of Isla Grande de Tierra del Fuego.
The oldest known unambiguous elephant seal fossils are fragmentary fossils of a member of the tribe Miroungini described from the late Pliocene Petane Formation of New Zealand. Teeth originally identified as representing an unnamed species of Mirounga have been found in South Africa, and dated to the Miocene epoch; however, Boessenecker and Churchill (2016) considered these teeth almost certainly to be misidentified toothed whale (odontocete) teeth. The elephant seals evolved in the Pacific Ocean during the Pliocene period.
Elephant seals breed annually and are seemingly faithful to colonies that have established breeding areas.
Taxonomy
John Edward Gray established the genus Mirounga in 1827. The generic name Mirounga is a Latinization of miouroung, which is said to have been a term for the seal in an Australian Aboriginal language. However, it is not known which language this represents.
Description
Elephant seals are marine mammals classified under the order Pinnipedia, which, in Latin, means feather- or fin-footed. Elephant seals are considered true seals, and fall under the family Phocidae. Phocids (true seals) are characterized by having no external ear and reduced limbs. The reduction of their limbs helps them be more streamlined and move easily in the water. However, it makes navigating on land more difficult because they cannot turn their hind flippers forward to walk like the otariids. In addition, the hind flippers of elephant seals have a lot of surface area, which helps propel them in the water.
Elephant seals spend the majority of their life (90%) underwater in search of food, and can cover a day when they head out to sea. When elephant seals are born, they can weigh up to and reach lengths up to . Sexual dimorphism is extreme, with male elephant seals weighing up to 10 times more than females, and having a prominent proboscis.
Elephant seals take their name from the large proboscis of the adult male (bull), reminiscent of an elephant's trunk, and considered a secondary sexual characteristic. The bull's proboscis is used in producing extraordinarily loud roaring noises, especially during the mating season. More importantly, however, the nose acts as a sort of rebreather, filled with cavities that reabsorb moisture from their exhalations. This is important during the mating season when the seals do not leave the beach to feed, and must conserve body moisture as there is no incoming source of water.
They are very much larger than other pinnipeds, with southern elephant seal bulls typically reaching a length of and a weight of , and are much larger than the adult females (cows), with some exceptionally large males reaching up to in length and weighing ; cows typically measure about and . Northern elephant seal bulls reach a length of and the heaviest weigh about .
The northern and southern elephant seal can be distinguished by various external features. On average, the southern elephant seal tends to be larger than the northern species. Adult male elephant seals belonging to the northern species tend to have a larger proboscis, and thick chest area with a red coloration compared to the southern species. Females do not have the large proboscis and can be distinguished between species by looking at their nose characteristics. Southern females tend to have a smaller, blunt nose compared to northern females.
Extant species distributions
Physiology
Elephant seals spend up to 80% of their lives in the ocean. They can hold their breath for more than 100 minutes – longer than any other noncetacean mammal. Elephant seals dive to beneath the ocean's surface (the deepest recorded dive of an elephant seal is by a southern elephant seal, while the record for the northern elephant seal is ). The average depth of their dives is about , typically for around 20 minutes for females and 60 minutes for males, as they search for their favorite foods, which are skates, rays, squid, octopuses, eels, small sharks and large fish. Their stomachs also often contain gastroliths. They spend only brief amounts of time at the surface to rest between dives (2–3 minutes). Females tend to dive a bit deeper due to their prey source.
Elephant seals are shielded from extreme cold more by their blubber than by fur. Their hair and outer layers of skin molt in large patches. The skin has to be regrown by blood vessels reaching through the blubber. When molting occurs, the seal is susceptible to the cold, and must rest on land, in a safe place called a "haul out". Northern males and young adults haul out during June to July to molt; northern females and immature seals during April to May.
Elephant seals have a very large volume of blood, allowing them to hold a large amount of oxygen for use when diving. They have large sinuses in their abdomens to hold blood and can also store oxygen in their muscles with increased myoglobin concentrations in muscle. In addition, they have a larger proportion of oxygen-carrying red blood cells. These adaptations allow elephant seals to dive to such depths and remain underwater for up to two hours.
Unlike some other marine mammals, such as dolphins, elephant seals do not have unihemispheric slow-wave sleep. Instead they sleep deeply for a little less than 20 minutes at a time while sinking through the water, to depths that have been measured at up to 377 meters. When near the continental shelf, where the ocean is less deep, they often reach the bottom, which sometimes wakes them up, but more often they continue to sleep on the seabed. On average, they get about two hours of sleep a day over a period of seven months, which is among the lowest amounts of sleep of any mammal.
They are able to slow down their heartbeat (bradycardia) and divert blood flow from the external areas of the body to important core organs. They can also slow down their metabolism while performing deep dives.
Elephant seals have a helpful feature in their bodies known as the countercurrent heat exchanger to help conserve energy and prevent heat loss. In this system, arteries and veins are organized in a way to maintain a constant body temperature by having the cool blood flowing to the heart warmed by blood going to external areas of the animal.
Milk produced by elephant seals is remarkably high in milkfat compared to other mammals. After an initially lower state, it rises to over 50% milkfat (human breast milk is about 4% milkfat, and cow milk is about 3.5% milkfat).
Adaptations
Elephant seals have large circular eyes that have more rods than cones to help them see in low light conditions when they are diving. These seals also possess a structure called the tapetum lucidum, which helps their vision by having light reflected back to the retina to allow more chances for photoreceptors to detect light.
Their body is covered in blubber, which helps them keep warm and reduce drag while they are swimming. The shape of their body also helps them maneuver well in the water, but limits their movement on land. Also, elephant seals have the ability to fast for long periods of time while breeding or molting. The turbinate process, another unique adaptation, is very beneficial when these seals are fasting, breeding, molting, or hauling out. This unique nasal structure recycles moisture when they breathe and helps prevent water loss.
Elephant seals have external whiskers called vibrissae to help them locate prey and navigate their environment. The vibrissae are connected to blood vessels, nerves, and muscles making them an important sensing tool.
Due to evolutionary changes, their ear has been modified to work extremely well underwater. The structure of the inner ear helps amplify incoming sounds, and allows these seals to have good directional hearing due to the isolation of the inner ear. In addition to these adaptations, tissues in the ear canal allow the pressure in the ear to be adjusted while these seals perform their deep dives.
Breeding season
Males arrive at potential breeding sites in spring, and fast to ensure that they can mate with as many females as possible. Male elephant seals use fighting, vocalisations, and different positions to determine the dominant males. By the time males reach eight to nine years of age, they have developed a pronounced long nose, in addition to a chest shield, which is thickened skin in their chest area. They display their dominance by showing their noses, making loud vocalisations, and altering their postures. They fight each other by raising themselves and ramming each other with their chests and teeth.
By the time females arrive, each dominating male has already established his territory on the beach. Females cluster in groups called harems, which consist of up to 50 females surrounding one alpha male. Outside of these groups, a beta bull is normally roaming around on the beach. The beta bull helps the alpha by preventing other males accessing the females. In return, the beta bull might have an opportunity to mate with one of the females while the alpha is occupied.
Birth on average only takes a few minutes, and the mother and pup have a connection due to each other's unique smell and sound. The mothers will fast and nurse up to 28 days, providing their pups with rich milk. The last two to three days, however, females will be ready to mate, and the dominant males will pounce on the opportunity. Males and females lose up to a third of their body weight during the breeding season. The gestation period for females is 11 months, and the pupping seasons lasts from mid to late summer. The new pups will spend up to 10 additional weeks on land learning how to swim and dive.
Life history
The average lifespan of a northern elephant seal is 9 years, while the average lifespan of a southern elephant seal is 21 years. Males reach maturity at five to six years, but generally do not achieve alpha status until the age of eight, with the prime breeding years being between ages 9 and 12. The longest life expectancy of a male northern elephant seal is approximately 14 years.
Females begin breeding at age 3–6, and have one pup per breeding attempt. Most adult females breed each year. Breeding success is much lower for first-time mothers relative to experienced breeders. Annual survival probability of adult females is 0.83 for experienced breeding females, but only 0.66 for first-time breeders, indicating a significant cost of reproduction. More male pups are produced than female pups in years with warmer sea surface temperature in the northeastern Pacific Ocean.
Females and males utilize different feeding strategies in order to maximize their reproductive success. Males feed in benthic regions with more abundant food sources, but also more abundant predators. Females feed in pelagic regions where they are less likely to find prey, but also less likely to be preyed upon. They employ these different strategies because females are smaller, requiring less food, and it is also most important for them to have as many breeding seasons as possible in order to maximize reproductive success. On the other hand, males can adopt a riskier strategy in the hopes of gaining as much mass as possible, and thus being able to have one extremely successful breeding season.
Molting
Once a year, elephant seals go through a process called molting, in which they shed the outer layer of hair and skin. This molting process takes up to a month to complete. When it comes time to molt, they haul out on land to shed their outer layer and do not consume any food during this time. The females and juveniles molt first, followed by the subadult males, and finally the large mature males.
Predators
The main predators of elephant seals are killer whales and great white sharks. Cookiecutter sharks can take bites from their skin.
Milk stealing
Sheathbills, skuas, western gulls, and African feral cats have been reported to steal milk from the elephant seals' teats.
Status
The IUCN lists both species of elephant seal as being of least concern, although they are still threatened by entanglement in marine debris, fishery interactions, and boat collisions. Though a complete population count of elephant seals is not possible because all age classes are not ashore at the same time, a 2005 study of the California breeding stock estimated approximately 124,000 individuals. The animal is protected in most countries where it lives. In Mexico, the northern elephant seal is protected in the Guadalupe Island Biosphere Reserve where it was rediscovered after being believed to be extinct.
Gallery
| Biology and health sciences | Pinnipeds | Animals |
404582 | https://en.wikipedia.org/wiki/Well-formed%20formula | Well-formed formula | In mathematical logic, propositional logic and predicate logic, a well-formed formula, abbreviated WFF or wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language.
The abbreviation wff is pronounced "woof", or sometimes "wiff", "weff", or "whiff".
A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. Two key uses of formulas are in propositional logic and predicate logic.
Introduction
A key use of formulas is in propositional logic and predicate logic such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties, and the final formula in the sequence is what is proven.
Although the term "formula" may be used for written marks (for instance, on a piece of paper or chalkboard), it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of the formula. This distinction between the vague notion of "property" and the inductively-defined notion of well-formed formula has roots in Weyl's 1910 paper "Über die Definitionen der mathematischen Grundbegriffe". Thus the same formula may be written more than once, and a formula might in principle be so long that it cannot be written at all within the physical universe.
Formulas themselves are syntactic objects. They are given meanings by interpretations. For example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula.
Propositional calculus
The formulas of propositional calculus, also called propositional formulas, are expressions such as (A ∧ (B ∨ C)). Their definition begins with the arbitrary choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses "(" and ")", all of which are assumed not to be in V. The formulas will be certain expressions (that is, strings of symbols) over this alphabet.
The formulas are inductively defined as follows (a short code sketch of this definition appears after the list):
Each propositional variable is, on its own, a formula.
If φ is a formula, then ¬φ is a formula.
If φ and ψ are formulas, and • is any binary connective, then (φ • ψ) is a formula. Here • could be (but is not limited to) the usual operators ∨, ∧, →, or ↔.
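To make the inductive definition concrete, the following is a minimal sketch in Python that treats formulas as tree-shaped values, so that everything built through the constructors is well formed by construction. The class names (Formula, Var, Not, Binary) are illustrative choices, not part of any standard notation.

    # A minimal sketch: propositional formulas as an inductive datatype.
    # Every value built through these constructors is well formed by construction.
    from dataclasses import dataclass

    class Formula:
        pass

    @dataclass(frozen=True)
    class Var(Formula):        # a propositional variable, e.g. Var("p")
        name: str

    @dataclass(frozen=True)
    class Not(Formula):        # ¬φ
        sub: Formula

    @dataclass(frozen=True)
    class Binary(Formula):     # (φ • ψ) for a binary connective •
        op: str                # one of "∨", "∧", "→", "↔"
        left: Formula
        right: Formula

    # Example: (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
    p, q, r, s = Var("p"), Var("q"), Var("r"), Var("s")
    phi = Binary("∨",
                 Binary("∧", Binary("→", p, q), Binary("→", r, s)),
                 Binary("∧", Not(q), Not(s)))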
This definition can also be written as a formal grammar in Backus–Naur form, provided the set of variables is finite:
<alpha set> ::= p | q | r | s | t | u | ... (the arbitrary finite set of propositional variables)
<form> ::= <alpha set> | ¬<form> | (<form> ∧ <form>) | (<form> ∨ <form>) | (<form> → <form>) | (<form> ↔ <form>)
Using this grammar, the sequence of symbols
(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
is a formula, because it is grammatically correct. The sequence of symbols
((p → q)→(qq))p))
is not a formula, because it does not conform to the grammar.
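One way to see why the first string conforms to the grammar and the second does not is to run a recognizer for it. The sketch below is a minimal recursive-descent recognizer in Python, under the simplifying assumptions that variables are single letters and whitespace has been removed; the function names are ours, not standard.

    # Minimal recursive-descent recognizer for fully parenthesized
    # propositional formulas over single-letter variables (illustrative sketch).
    BINARY = {"∨", "∧", "→", "↔"}

    def is_wff(s: str) -> bool:
        pos, ok = _formula(s, 0)
        return ok and pos == len(s)

    def _formula(s, i):
        if i >= len(s):
            return i, False
        c = s[i]
        if c.isalpha():               # a propositional variable
            return i + 1, True
        if c == "¬":                  # ¬φ
            return _formula(s, i + 1)
        if c == "(":                  # (φ • ψ)
            j, ok = _formula(s, i + 1)
            if not ok or j >= len(s) or s[j] not in BINARY:
                return i, False
            k, ok = _formula(s, j + 1)
            if not ok or k >= len(s) or s[k] != ")":
                return i, False
            return k + 1, True
        return i, False

    assert is_wff("(((p→q)∧(r→s))∨(¬q∧¬s))")   # grammatical
    assert not is_wff("((p→q)→(qq))p))")        # not grammatical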
A complex formula may be difficult to read, owing to, for example, the proliferation of parentheses. To reduce the need for parentheses, precedence rules (akin to the standard mathematical order of operations) are assumed among the operators, making some operators more binding than others. For example, assume the precedence (from most binding to least binding) 1. ¬ 2. → 3. ∧ 4. ∨. Then the formula
(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))
may be abbreviated as
p → q ∧ r → s ∨ ¬q ∧ ¬s
This is, however, only a convention used to simplify the written representation of a formula. If the precedence were assumed, for example, to be left-right associative, in the following order: 1. ¬ 2. ∧ 3. ∨ 4. →, then the same formula above (without parentheses) would be rewritten as
(p → (q ∧ r)) → (s ∨ (¬q ∧ ¬s))
Predicate logic
The definition of a formula in first-order logic is relative to the signature of the theory at hand. This signature specifies the constant symbols, predicate symbols, and function symbols of the theory, along with the arities of the function and predicate symbols.
The definition of a formula comes in several parts. First, the set of terms is defined recursively. Terms, informally, are expressions that represent objects from the domain of discourse.
Any variable is a term.
Any constant symbol from the signature is a term.
Any expression of the form f(t1, ..., tn), where f is an n-ary function symbol and t1, ..., tn are terms, is again a term.
The next step is to define the atomic formulas.
If t1 and t2 are terms, then t1 = t2 is an atomic formula.
If R is an n-ary predicate symbol and t1, ..., tn are terms, then R(t1, ..., tn) is an atomic formula.
Finally, the set of formulas is defined to be the smallest set containing the set of atomic formulas such that the following holds:
¬φ is a formula when φ is a formula;
(φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ) are formulas when φ and ψ are formulas;
∀x φ is a formula when x is a variable and φ is a formula;
∃x φ is a formula when x is a variable and φ is a formula (alternatively, ∃x φ could be defined as an abbreviation for ¬∀x ¬φ).
If a formula has no occurrences of ∃x or ∀x, for any variable x, then it is called quantifier-free. An existential formula is a formula starting with a sequence of existential quantifications followed by a quantifier-free formula.
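The two-layer definition, terms first and then formulas, can likewise be rendered as a pair of inductive datatypes. The following Python sketch is illustrative only; the names are ours, and the signature is left implicit in the symbols used.

    # Illustrative sketch: first-order terms and formulas as two inductive layers.
    from dataclasses import dataclass
    from typing import Tuple

    class Term: pass

    @dataclass(frozen=True)
    class Variable(Term):       # any variable is a term
        name: str

    @dataclass(frozen=True)
    class Const(Term):          # a constant symbol from the signature
        name: str

    @dataclass(frozen=True)
    class Func(Term):           # f(t1, ..., tn)
        name: str
        args: Tuple[Term, ...]

    class Fm: pass

    @dataclass(frozen=True)
    class Eq(Fm):               # t1 = t2 (atomic)
        left: Term
        right: Term

    @dataclass(frozen=True)
    class Pred(Fm):             # R(t1, ..., tn) (atomic)
        name: str
        args: Tuple[Term, ...]

    @dataclass(frozen=True)
    class Neg(Fm):              # ¬φ
        sub: Fm

    @dataclass(frozen=True)
    class Conn(Fm):             # (φ • ψ)
        op: str
        left: Fm
        right: Fm

    @dataclass(frozen=True)
    class Forall(Fm):           # ∀x φ
        var: str
        body: Fm

    @dataclass(frozen=True)
    class Exists(Fm):           # ∃x φ
        var: str
        body: Fm

    # Example: ∀x R(x, f(x)); every value built this way is a formula by construction.
    x = Variable("x")
    example = Forall("x", Pred("R", (x, Func("f", (x,)))))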
Atomic and open formulas
An atomic formula is a formula that contains no logical connectives or quantifiers, or equivalently a formula that has no strict subformulas.
The precise form of atomic formulas depends on the formal system under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term.
According to some terminology, an open formula is formed by combining atomic formulas using only logical connectives, to the exclusion of quantifiers. This is not to be confused with a formula which is not closed.
Closed formulas
A closed formula, also ground formula or sentence, is a formula in which there are no free occurrences of any variable. If A is a formula of a first-order language in which the variables v1, ..., vn have free occurrences, then A preceded by ∀v1 ⋯ ∀vn is a universal closure of A.
Properties applicable to formulas
A formula A in a language Q is valid if it is true for every interpretation of Q.
A formula A in a language Q is satisfiable if it is true for some interpretation of Q.
A formula A of the language of arithmetic is decidable if it represents a decidable set, i.e. if there is an effective method which, given a substitution of the free variables of A, decides whether the resulting instance of A is provable or its negation is.
Usage of the terminology
In earlier works on mathematical logic (e.g. by Church), formulas referred to any strings of symbols and among these strings, well-formed formulas were the strings that followed the formation rules of (correct) formulas.
Several authors simply say formula. Modern usages (especially in the context of computer science, with mathematical software such as model checkers, automated theorem provers, and interactive theorem provers) tend to retain only the algebraic concept of a formula and to leave the question of well-formedness, i.e. of the concrete string representation of formulas (using this or that symbol for connectives and quantifiers, using this or that parenthesizing convention, using Polish or infix notation, etc.), as a mere notational problem.
The expression "well-formed formulas" (WFF) also crept into popular culture. WFF is part of an esoteric pun used in the name of the academic game "WFF 'N PROOF: The Game of Modern Logic", by Layman Allen, developed while he was at Yale Law School (he was later a professor at the University of Michigan). The suite of games is designed to teach the principles of symbolic logic to children (in Polish notation). Its name is an echo of whiffenpoof, a nonsense word used as a cheer at Yale University made popular in The Whiffenpoof Song and The Whiffenpoofs.
| Mathematics | Mathematical logic | null |
404646 | https://en.wikipedia.org/wiki/Farsightedness | Farsightedness | Far-sightedness, also known as long-sightedness, hypermetropia, and hyperopia, is a condition of the eye where distant objects are seen clearly but near objects appear blurred. This blur is due to incoming light being focused behind, instead of on, the retina due to insufficient accommodation by the lens. Minor hypermetropia in young patients is usually corrected by their accommodation, without any defects in vision. But, due to this accommodative effort for distant vision, people may complain of eye strain during prolonged reading. If the hypermetropia is high, there will be defective vision for both distance and near. People may also experience accommodative dysfunction, binocular dysfunction, amblyopia, and strabismus. Newborns are almost invariably hypermetropic, but it gradually decreases as the newborn gets older.
There are many causes for this condition. It may occur when the axial length of the eyeball is too short or if the lens or cornea is flatter than normal. Changes in the refractive index of the lens, alterations in the position of the lens, or absence of the lens are the other main causes. Risk factors include a family history of the condition, diabetes, certain medications, and tumors around the eye. It is a type of refractive error. Diagnosis is based on an eye exam.
Management can occur with eyeglasses, contact lenses, or refractive corneal surgeries. Glasses are easiest while contact lenses can provide a wider field of vision. Surgery works by changing the shape of the cornea. Far-sightedness primarily affects young children, with rates of 8% at 6 years old and 1% at 15 years old. It then becomes more common again after the age of 40, known as presbyopia, affecting about half of people. The best treatment option to correct hypermetropia due to aphakia is IOL implantation.
Other common types of refractive errors are near-sightedness, astigmatism, and presbyopia.
Signs and symptoms
In young patients, mild hypermetropia may not produce any symptoms. The signs and symptoms of far-sightedness include blurry vision, frontal or frontotemporal headaches, eye strain, and tiredness of the eyes. The most common symptom is eye strain. Difficulty seeing with both eyes (binocular vision) may occur, as well as difficulty with depth perception. The asthenopic symptoms and near blur are usually seen after close work, especially in the evening or at night.
Complications
Far-sightedness can have rare complications such as strabismus and amblyopia. At a young age, severe far-sightedness can cause the child to have double vision as a result of "over-focusing".
Hypermetropic patients with short axial length are at higher risk of developing primary angle closure glaucoma, so routine gonioscopy and glaucoma evaluation are recommended for all hypermetropic adults.
Causes
Simple hypermetropia, the most common form of hypermetropia, is caused by normal biological variation in the development of the eyeball. Aetiologically, the causes of hypermetropia can be classified as:
Axial: Axial hypermetropia occurs when the axial length of the eyeball is too short. A decrease of about 1 mm in axial length causes about 3 diopters of hypermetropia. One condition that causes axial hypermetropia is nanophthalmos.
Curvatural: Curvatural hypermetropia occurs when the curvature of the lens or cornea is flatter than normal. An increase of about 1 mm in the radius of curvature results in about 6 diopters of hypermetropia. The cornea is flatter in microcornea and cornea plana.
Index: Age-related changes in refractive index (cortical sclerosis) can cause hypermetropia. Another cause of index hypermetropia is diabetes. Occasionally, a mild hypermetropic shift may also be seen in association with cortical or subcapsular cataract.
Positional: Positional hypermetropia occurs due to posterior dislocation of the lens or an intraocular lens (IOL). It may occur due to trauma.
Consecutive: Consecutive hypermetropia occurs due to surgical overcorrection of myopia or surgical undercorrection in cataract surgery.
Functional: Functional hypermetropia results from paralysis of accommodation, as seen in internal ophthalmoplegia, third cranial nerve (CN III) palsy, etc.
Absence of lens: Congenital or acquired aphakia causes a high degree of hypermetropia.
Far-sightedness is often present from birth, but children have a very flexible eye lens, which helps to compensate. In rare instances, hyperopia can be due to diabetes, as well as problems with the blood vessels in the retina.
Diagnosis
A diagnosis of far-sightedness is made by objective refraction, using either a retinoscope or an automated refractor, or by subjective refraction, using trial lenses in a trial frame or a phoropter.
Ancillary tests for abnormal structures and physiology can be made via a slit lamp test, which examines the cornea, conjunctiva, anterior chamber, and iris.
In severe cases of hyperopia from birth, the brain has difficulty in merging the images that each individual eye sees. This is because the images the brain receives from each eye are always blurred. A child with severe hyperopia can never see objects in detail. If the brain never learns to see objects in detail, then there is a high chance of one eye becoming dominant. The result is that the brain will block the impulses of the non-dominant eye. In contrast, the child with myopia can see objects close to the eye in detail and does learn at an early age to see objects in detail.
Classification
Hyperopia is typically classified according to clinical appearance, its severity, or how it relates to the eye's accommodative status.
Clinical classification
There are three clinical categories of hyperopia.
Simple hyperopia: Occurs naturally due to biological diversity.
Pathological hyperopia: Caused by disease, trauma, or abnormal development.
Functional hyperopia: Caused by paralysis that interferes with the eye's ability to accommodate.
Classification according to severity
There are also three categories of severity (a small code sketch after the list illustrates the cutoffs):
Low: Refractive error less than or equal to +2.00 diopters (D).
Moderate: Refractive error greater than +2.00 D up to +5.00 D.
High: Refractive error greater than +5.00 D.
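These cutoffs amount to a simple threshold rule. As a minimal sketch in Python (the function name is a hypothetical choice, not standard terminology), the classification could be written as:

    # Illustrative sketch of the severity cutoffs above (function name is ours).
    def hyperopia_severity(refractive_error_diopters: float) -> str:
        if refractive_error_diopters <= 2.00:
            return "low"
        elif refractive_error_diopters <= 5.00:
            return "moderate"
        else:
            return "high"

    assert hyperopia_severity(1.75) == "low"
    assert hyperopia_severity(3.50) == "moderate"
    assert hyperopia_severity(6.25) == "high"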
Components of hypermetropia
Accommodation plays a significant role in hyperopia. Considering accommodative status, hyperopia can be classified as:
Total hypermetropia: It is the total amount of hyperopia which is obtained after complete relaxation of accommodation using cycloplegics like atropine.
Latent hyperopia: It is the amount of hyperopia normally corrected by ciliary tone (approximately 1 diopter).
Manifest hyperopia: It is the amount of hyperopia not corrected by ciliary tone. Manifest hyperopia is further classified into two, facultative and absolute.
Facultative hyperopia: It is the part of hyperopia corrected by the patient's accommodation.
Absolute hyperopia: It is the residual part of hyperopia which causes blurring of vision for distance.
So, total hyperopia = latent hyperopia + manifest hyperopia (facultative + absolute)
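As an illustration with hypothetical numbers: if cycloplegic refraction reveals a total hypermetropia of +3.00 D and the ciliary tone masks about 1.00 D (latent), the manifest component is +2.00 D; if the patient's accommodation can compensate for +1.50 D of that (facultative), the remaining +0.50 D is absolute and produces blurred distance vision.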
Treatment
Corrective lenses
The simplest form of treatment for far-sightedness is the use of corrective lenses, i.e. eyeglasses or contact lenses. Eyeglasses used to correct far-sightedness have convex lenses.
Surgery
There are also surgical treatments for far-sightedness:
Laser procedures
Photorefractive keratectomy (PRK): This is a refractive technique performed by removing a minimal amount of the corneal surface. Hyperopic PRK has many complications, such as a regression effect, astigmatism due to epithelial healing, and corneal haze. Postoperative epithelial healing time is also longer for PRK.
Laser assisted in situ keratomileusis (LASIK): Laser eye surgery to reshape the cornea, so that glasses or contact lenses are no longer needed. Excimer laser LASIK can correct hypermetropia up to +6 diopters. LASIK is contraindicated in patients with lupus and rheumatoid arthritis.
Laser epithelial keratomileusis (LASEK): Resembles PRK, but uses alcohol to loosen the corneal surface.
Epi-LASIK: Epi-LASIK is also used to correct hyperopia. In this procedure, the use of an epikeratome eliminates the need for alcohol.
Laser thermal keratoplasty (LTK): Laser thermal keratoplasty is a laser-based, non-destructive refractive procedure used to correct hyperopia and presbyopia. It uses a thulium-holmium-chromium (THC):YAG laser.
IOL implantation
Aphakia correction: High degree hypermetropia due to absence of lens (aphakia) is best corrected using intraocular lens implantation.
Refractive lens exchange (RLE): A variation of cataract surgery where the natural crystalline lens is replaced with an artificial intraocular lens; the difference is the existence of abnormal ocular anatomy which causes a high refractive error.
Phakic IOL: Phakic intraocular lenses are lenses implanted inside the eye without removing the natural crystalline lens. Phakic IOLs can be used to correct hypermetropia up to +20 diopters.
Non laser procedures
Conductive keratoplasty (CK): Conductive keratoplasty is a non-laser refractive procedure used to correct presbyopia and low hypermetropia (+0.75 D to +3.25 D) with or without astigmatism (up to 0.75 D). It uses radiofrequency energy to heat and shrink corneal collagen tissue. CK is contraindicated in pregnant or breastfeeding women, central corneal dystrophies and scarring, a history of herpetic keratitis, type 1 diabetes, etc.
Automated lamellar keratoplasty (ALK): Hyperopic automated lamellar keratoplasty (H-ALK) and homoplastic ALK are ALK procedures that correct low to moderate hyperopia. Poor predictability and the risk of complications limit the usefulness of these procedures.
Keratophakia and epikeratophakia are two other non-laser surgical procedures used to correct hypermetropia. Keratophakia is a surgical technique developed by Barraquer for treating high hypermetropia and aphakia. Poor predictability and induced irregular astigmatism are complications of these procedures.
Etymology
The term hyperopia comes from Greek ὑπέρ hyper "over" and ὤψ ōps "sight" (GEN ὠπός ōpos).
| Biology and health sciences | Disabilities | Health |
405421 | https://en.wikipedia.org/wiki/Ariel%20%28moon%29 | Ariel (moon) | Ariel is the fourth-largest moon of Uranus. Ariel orbits and rotates in the equatorial plane of Uranus, which is almost perpendicular to the orbit of Uranus, so the moon has an extreme seasonal cycle.
It was discovered on 24 October 1851 by William Lassell and named for a character in two different pieces of literature. As of 2019, much of the detailed knowledge of Ariel derives from a single flyby of Uranus performed by the space probe Voyager 2 in 1986, which managed to image around 35% of the moon's surface. There are no active plans at present to return to study the moon in more detail, although various concepts such as a Uranus Orbiter and Probe have been proposed.
After Miranda, Ariel is the second-closest of Uranus's five major rounded satellites. Among the smallest of the Solar System's 20 known spherical moons (it ranks 14th among them in diameter), it is believed to be composed of roughly equal parts ice and rocky material. Its mass is approximately equal in magnitude to Earth's hydrosphere.
Like all of Uranus's moons, Ariel probably formed from an accretion disc that surrounded the planet shortly after its formation, and, like other large moons, it is likely differentiated, with an inner core of rock surrounded by a mantle of ice. Ariel has a complex surface consisting of extensive cratered terrain cross-cut by a system of scarps, canyons, and ridges. The surface shows signs of more recent geological activity than other Uranian moons, most likely due to tidal heating.
Discovery and name
Discovered on 24 October 1851 by William Lassell, it is named for a sky spirit in Alexander Pope's 1712 poem The Rape of the Lock and Shakespeare's The Tempest.
Both Ariel and the slightly larger Uranian satellite Umbriel were discovered by William Lassell on 24 October 1851. Although William Herschel, who discovered Uranus's two largest moons Titania and Oberon in 1787, claimed to have observed four additional moons, this was never confirmed and those four objects are now thought to be spurious.
All of Uranus's moons are named after characters from the works of William Shakespeare or Alexander Pope's The Rape of the Lock. The names of all four satellites of Uranus then known were suggested by John Herschel in 1852 at the request of Lassell, though it is uncertain if Herschel devised the names, or if Lassell did so and then sought Herschel's permission. Ariel is named after the leading sylph in The Rape of the Lock. It is also the name of the spirit who serves Prospero in Shakespeare's The Tempest. The moon is also designated Uranus I.
Orbit
Among Uranus's five major moons, Ariel is the second closest to the planet, orbiting at the distance of about 190,000 km. Its orbit has a small eccentricity and is inclined very little relative to the equator of Uranus. Its orbital period is around 2.5 Earth days, coincident with its rotational period. This means that one side of the moon always faces the planet, a condition known as tidal locking. Ariel's orbit lies completely inside the Uranian magnetosphere. The trailing hemispheres (those facing away from their directions of orbit) of airless satellites orbiting inside a magnetosphere, like Ariel, are struck by magnetospheric plasma co-rotating with the planet. This bombardment may lead to the darkening of the trailing hemispheres observed for all Uranian moons except Oberon (see below). Ariel also captures magnetospheric charged particles, producing a pronounced dip in energetic particle count near the moon's orbit observed by Voyager 2 in 1986.
Because Ariel, like Uranus, orbits the Sun almost on its side relative to its rotation, its northern and southern hemispheres face either directly towards or directly away from the Sun at the solstices. This means it is subject to an extreme seasonal cycle; just as Earth's poles see permanent night or daylight around the solstices, Ariel's poles see permanent night or daylight for half a Uranian year (42 Earth years), with the Sun rising close to the zenith over one of the poles at each solstice. The Voyager 2 flyby coincided with the 1986 southern summer solstice, when nearly the entire northern hemisphere was dark. Once every 42 years, when Uranus has an equinox and its equatorial plane intersects the Earth, mutual occultations of Uranus's moons become possible. A number of such events occurred in 2007–2008, including an occultation of Ariel by Umbriel on 19 August 2007.
Currently Ariel is not involved in any orbital resonance with other Uranian satellites. In the past, however, it may have been in a 5:3 resonance with Miranda, which could have been partially responsible for the heating of that moon (although the maximum heating attributable to a former 1:3 resonance of Umbriel with Miranda was likely about three times greater). Ariel may have once been locked in the 4:1 resonance with Titania, from which it later escaped. Escape from a mean motion resonance is much easier for the moons of Uranus than for those of Jupiter or Saturn, due to Uranus's lesser degree of oblateness. This resonance, which was likely encountered about 3.8 billion years ago, would have increased Ariel's orbital eccentricity, resulting in tidal friction due to time-varying tidal forces from Uranus. This would have caused warming of the moon's interior by as much as 20 K.
Composition and internal structure
Ariel is the fourth-largest of the Uranian moons by size and mass. It is also the 14th-largest moon in the Solar System. The moon's density is 1.52 g/cm3, which indicates that it consists of roughly equal parts water ice and a dense non-ice component. The latter could consist of rock and carbonaceous material including heavy organic compounds known as tholins. The presence of water ice is supported by infrared spectroscopic observations, which have revealed crystalline water ice on the surface of the moon, which is porous and thus transmits little solar heat to layers below. Water ice absorption bands are stronger on Ariel's leading hemisphere than on its trailing hemisphere. The cause of this asymmetry is not known, but it may be related to bombardment by charged particles from Uranus's magnetosphere, which is stronger on the trailing hemisphere (due to the plasma's co-rotation). The energetic particles tend to sputter water ice, decompose methane trapped in ice as clathrate hydrate and darken other organics, leaving a dark, carbon-rich residue behind.
Except for water, two other compounds have been identified on the surface of Ariel by infrared spectroscopy. The first is carbon dioxide (CO2), which is concentrated mainly on its trailing hemisphere. Ariel shows the strongest spectroscopic evidence for CO2 of any Uranian satellite, and was the first Uranian satellite on which this compound was discovered. The origin of the carbon dioxide is not completely clear. It might be produced locally from carbonates or organic materials under the influence of the energetic charged particles coming from Uranus's magnetosphere or solar ultraviolet radiation. This hypothesis would explain the asymmetry in its distribution, as the trailing hemisphere is subject to a more intense magnetospheric influence than the leading hemisphere. Another possible source is the outgassing of primordial CO2 trapped by water ice in Ariel's interior. The escape of CO2 from the interior may be related to past geological activity on this moon.
The second compound, identified by its spectral feature at a wavelength of 2.2 μm, is ammonia, which is distributed more or less homogeneously over the surface. The presence of ammonia may indicate that Ariel was geologically active in the recent past.
Given its size, rock/ice composition and the possible presence of salt or ammonia in solution to lower the freezing point of water, Ariel's interior may be differentiated into a rocky core surrounded by an icy mantle. If this is the case, the radius of the core (372 km) is about 64% of the radius of the moon, and its mass is around 56% of the moon's mass; the parameters are dictated by the moon's composition. The pressure in the center of Ariel is about 0.3 GPa (3 kbar). The current state of the icy mantle is unclear. The existence of a subsurface ocean is currently considered possible, though a 2006 study suggests that radiogenic heating alone would not be enough to allow for one. Further research concluded that an active subsurface ocean is possible for the four largest moons of Uranus.
Surface
Albedo and color
Ariel is the most reflective of Uranus's moons. Its surface shows an opposition surge: the reflectivity decreases from 53% at a phase angle of 0° (geometrical albedo) to 35% at an angle of about 1°. The Bond albedo of Ariel is about 23%—the highest among Uranian satellites. The surface of Ariel is generally neutral in color. There may be an asymmetry between the leading and trailing hemispheres; the latter appears to be redder than the former by 2%. Ariel's surface generally does not demonstrate any correlation between albedo and geology on one hand and color on the other hand. For instance, canyons have the same color as the cratered terrain. However, bright impact deposits around some fresh craters are slightly bluer in color. There are also some slightly blue spots, which do not correspond to any known surface features.
Surface features
The observed surface of Ariel can be divided into three terrain types: cratered terrain, ridged terrain, and plains. The main surface features are impact craters, canyons, fault scarps, ridges, and troughs.
The cratered terrain, a rolling surface covered by numerous impact craters and centered on Ariel's south pole, is the moon's oldest and most geographically extensive geological unit. It is intersected by a network of scarps, canyons (graben), and narrow ridges mainly occurring in Ariel's mid-southern latitudes. The canyons, known as chasmata, probably represent graben formed by extensional faulting, which resulted from global tensional stresses caused by the freezing of water (or aqueous ammonia) in the moon's interior (see below). They are 15–50 km wide and trend mainly in an east- or northeasterly direction. The floors of many canyons are convex, rising up by 1–2 km. Sometimes the floors are separated from the walls of canyons by grooves (troughs) about 1 km wide. The widest graben have grooves running along the crests of their convex floors, which are called valles. The longest canyon is Kachina Chasma, at over 620 km in length (the feature extends into the hemisphere of Ariel that Voyager 2 did not see illuminated).
The second main terrain type—ridged terrain—comprises bands of ridges and troughs hundreds of kilometers in extent. It bounds the cratered terrain and cuts it into polygons. Within each band, which can be up to 25 to 70 km wide, are individual ridges and troughs up to 200 km long and between 10 and 35 km apart. The bands of ridged terrain often form continuations of canyons, suggesting that they may be a modified form of the graben or the result of a different reaction of the crust to the same extensional stresses, such as brittle failure.
The plains, relatively low-lying smooth areas that must have formed over a long period of time judging by their varying levels of cratering, are the youngest terrain observed on Ariel. They are found on the floors of canyons and in a few irregular depressions in the middle of the cratered terrain. In the latter case they are separated from the cratered terrain by sharp boundaries, which in some cases have a lobate pattern. The most likely origin for the plains is through volcanic processes; their linear vent geometry, resembling terrestrial shield volcanoes, and distinct topographic margins suggest that the erupted liquid was very viscous, possibly a supercooled water/ammonia solution, with solid ice volcanism also a possibility. The thickness of these hypothetical cryolava flows is estimated at 1–3 km. The canyons must therefore have formed at a time when endogenic resurfacing was still taking place on Ariel. A few of these areas appear to be less than 100 million years old, suggesting that Ariel may still be geologically active in spite of its relatively small size and lack of current tidal heating.
Ariel appears to be fairly evenly cratered compared to other moons of Uranus; the relative paucity of large craters suggests that its surface does not date to the Solar System's formation, which means that Ariel must have been completely resurfaced at some point of its history. Ariel's past geologic activity is believed to have been driven by tidal heating at a time when its orbit was more eccentric than currently. The largest crater observed on Ariel, Yangoor, is only 78 km across, and shows signs of subsequent deformation. All large craters on Ariel have flat floors and central peaks, and few of the craters are surrounded by bright ejecta deposits. Many craters are polygonal, indicating that their appearance was influenced by the preexisting crustal structure. In the cratered plains there are a few large (about 100 km in diameter) light patches that may be degraded impact craters. If this is the case they would be similar to palimpsests on Jupiter's moon Ganymede. It has been suggested that a circular depression 245 km in diameter located at 10°S 30°E is a large, highly degraded impact structure.
Origin and evolution
Ariel is thought to have formed from an accretion disc or subnebula; a disc of gas and dust that either existed around Uranus for some time after its formation or was created by the giant impact that most likely gave Uranus its large obliquity. The precise composition of the subnebula is not known; however, the higher density of Uranian moons compared to the moons of Saturn indicates that it may have been relatively water-poor. Significant amounts of carbon and nitrogen may have been present in the form of carbon monoxide (CO) and molecular nitrogen (N2), instead of methane and ammonia. The moons that formed in such a subnebula would contain less water ice (with CO and N2 trapped as clathrate) and more rock, explaining the higher density.
The accretion process probably lasted for several thousand years before the moon was fully formed. Models suggest that impacts accompanying accretion caused heating of Ariel's outer layer, reaching a maximum temperature of around 195 K at a depth of about 31 km. After the end of formation, the subsurface layer cooled, while the interior of Ariel heated due to decay of radioactive elements present in its rocks. The cooling near-surface layer contracted, while the interior expanded. This caused strong extensional stresses in the moon's crust reaching estimates of 30 MPa, which may have led to cracking. Some present-day scarps and canyons may be a result of this process, which lasted for about 200 million years.
The initial accretional heating together with continued decay of radioactive elements and likely tidal heating may have led to melting of the ice if an antifreeze like ammonia (in the form of ammonia hydrate) or some salt was present. The melting may have led to the separation of ice from rocks and formation of a rocky core surrounded by an icy mantle. A layer of liquid water (ocean) rich in dissolved ammonia may have formed at the core–mantle boundary. The eutectic temperature of this mixture is 176 K. The ocean, however, is likely to have frozen long ago. The freezing of the water likely led to the expansion of the interior, which may have been responsible for the formation of the canyons and obliteration of the ancient surface. The liquids from the ocean may have been able to erupt to the surface, flooding floors of canyons in the process known as cryovolcanism. More recent analysis concluded that an active ocean is probable for the four largest moons of Uranus, specifically including Ariel.
Thermal modeling of Saturn's moon Dione, which is similar to Ariel in size, density, and surface temperature, suggests that solid state convection could have lasted in Ariel's interior for billions of years, and that temperatures in excess of 173 K (the melting point of aqueous ammonia) may have persisted near its surface for several hundred million years after formation, and near a billion years closer to the core.
Observation and exploration
The apparent magnitude of Ariel is 14.8; similar to that of Pluto near perihelion. However, while Pluto can be seen through a telescope of 30 cm aperture, Ariel, due to its proximity to Uranus's glare, is often not visible to telescopes of 40 cm aperture.
The only close-up images of Ariel were obtained by the Voyager 2 probe, which photographed the moon during its flyby of Uranus in January 1986. The closest approach of Voyager 2 to Ariel was —significantly less than the distances to all other Uranian moons except Miranda. The best images of Ariel have a spatial resolution of about 2 km. They cover about 40% of the surface, but only 35% was photographed with the quality required for geological mapping and crater counting. At the time of the flyby, the southern hemisphere of Ariel (like those of the other moons) was pointed towards the Sun, so the northern (dark) hemisphere could not be studied. No other spacecraft has ever visited the Uranian system. The possibility of sending the Cassini spacecraft to Uranus was evaluated during its mission extension planning phase. It would have taken about twenty years to get to the Uranian system after departing Saturn, and these plans were scrapped in favour of remaining at Saturn and eventually destroying the spacecraft in Saturn's atmosphere.
Transits
On 26 July 2006, the Hubble Space Telescope captured a rare transit made by Ariel on Uranus, which cast a shadow that could be seen on the Uranian cloud tops. Such events are rare and only occur around equinoxes, as the moon's orbital plane about Uranus is tilted 98° to Uranus's orbital plane about the Sun. Another transit, in 2008, was recorded by the European Southern Observatory.
| Physical sciences | Solar System | Astronomy |
405429 | https://en.wikipedia.org/wiki/Murray%20cod | Murray cod | The Murray cod (Maccullochella peelii) is a large Australian predatory freshwater fish of the genus Maccullochella in the family Percichthyidae. Although the species is called a cod in the vernacular, it is not related to the Northern Hemisphere marine cod (Gadus) species. The Murray cod is an important part of Australia's vertebrate wildlife—as an apex predator in the Murray-Darling River system—and also significant in Australia's human culture. The Murray cod is the largest exclusively freshwater fish in Australia, and one of the largest in the world. Other common names for Murray cod include cod, greenfish, goodoo, Mary River cod, Murray perch, ponde, pondi and Queensland freshwater cod.
The scientific name of the Murray cod derives from the early Australian fish researcher Allan Riverstone McCulloch and from the Peel River, the river where the explorer Major Mitchell first scientifically described the species. This was for a number of years changed to M. peelii peelii to differentiate Murray cod from Mary River cod, which were designated as a subspecies of Murray cod. However, as of 2010, Mary River cod have been raised to full species status (M. mariensis), thus Murray cod have reverted simply to M. peelii.
Murray cod populations have declined severely since European colonisation of Australia due to a number of causes, including severe overfishing, river regulation, and habitat degradation and are now a listed threatened species. However, they once inhabited almost the entire Murray-Darling basin, Australia's largest river system, in very great numbers.
A long-lived fish, adult Murray cod are carnivorous and eat crustaceans (shrimp, yabbies, crays), fish and freshwater mussels. The species exhibits a high degree of parental care for their eggs, which are spawned in the spring and are generally laid in hollow logs or on other hard surfaces. Murray cod are a popular angling target and aquaculture species. Often available through the aquarium trade, they are also a popular aquarium species in Australia.
Description
The Murray cod is a large grouper-like fish with a deep, elongated body that is round in cross section. It has a broad, scooped head, and a large mouth lined with pads of very small, needle-like teeth. The jaws of the Murray cod are equal, or the lower jaw protrudes slightly.
The spiny dorsal fin of Murray cod is moderate to low in height and is partially separated by a notch from the high, rounded soft dorsal fin. Soft dorsal, anal, and caudal (tail) fins are all large and rounded, and are dusky grey or black with distinct white edges. The large, rounded pectoral fins are usually similar in colour to the flanks. The pelvic fins are large, angular, and set forward of the pectoral fins. The leading white-coloured rays on the pelvic fins split into two trailing white filaments, while the pelvic fins themselves are usually a translucent white or cream, tending toward opacity in large fish.
Murray cod are white to cream on their ventral (belly) surfaces. Their backs and flanks are usually yellowish-green to green, overlain with heavy darker green, but occasionally brown or black, mottling. The effect is a marbled appearance sometimes reminiscent of a leopard's markings. Colouration is related to water clarity; colouration is intense in fish from clear water habitats. Small to medium-sized Murray cod from clear-water habitats often have striking and very distinct colouration. Very large fish tend towards a speckled grey-green colouration.
Size
Murray cod are large fish, with adult fish regularly reaching in length. Murray cod are capable of growing well over in length and the largest on record was over and about in weight. Large breeding fish are rare in most wild populations today due to overfishing.
Related species
Murray cod continue a pattern present in Murray-Darling native fish genera of speciation into lowland and specialist upland species: Murray cod are the primarily lowland species and the endangered trout cod are the specialist upland species. The pattern is slightly blurred in the cod species because, being adaptable and successful fish, Murray cod push significant distances into upland habitats, while the now endangered trout cod stray (or did stray, before their decline) well down the upland/lowland transition zone, which can be extensive in Murray-Darling Rivers. Nevertheless, the basic pattern of speciation into a primarily lowland species and a specialist upland species is present.
Murray cod, like a number of other Murray-Darling native fish species, have also managed to cross the Great Dividing Range at least once through natural river capture events, leading to several species and subspecies of coastal cod. The best known are eastern freshwater cod of the Clarence River system in northern New South Wales, and Mary River cod of the Mary River system in south eastern Queensland, both of which are endangered, but survive today. Coastal cod were also found in the Richmond River system in northern New South Wales and the Brisbane River system in southern Queensland, but are now extinct.
Taxonomy
In Mitchell's original description, he classified the fish as "Family, Percidae; Genus, Acerina; Subgenus, Gristes, Cuv. or Growler; Species, Gristes peelii mihi, or Cod-perch", observing "This fish may be identical with the fish described by MM. Cuvier and Valenciennes Volume 3 page 45 under the name of Gristes macquariensis: but it differs from their description…".
In the 1800s and early 1900s, commercial fishermen, recreational fishermen, riverside residents, and some fisheries scientists (e.g. Anderson, Stead, Langtry) distinctly recognised two species of cod in the southern Murray-Darling basin, Murray cod and trout cod or "blue nose cod". Taxonomically however, confusion abounded. Ignoring glaring differences in size at sexual maturity, and via some rather unscientific reasoning, some prominent fisheries scientists (e.g. Whitley) insisted on recognising only one species of cod—the Murray cod (then named Maccullochella macquariensis, after an early Australian fish researcher with the surname McCulloch and the Macquarie River in New South Wales where the holotype was captured). Then, as trout cod declined into near extinction over the 1900s, the distinction between the two species was further eroded and finally questioned. In the 1970s, early genetic techniques confirmed that trout cod were a separate species and further showed that the original "Murray cod" specimen was in fact a trout cod. Following the rules of scientific classification, the name M. macquariensis remained with the original specimen, now known to be the trout cod, and a new name, M. peelii, for the Peel River where the new holotype was captured, was coined for the Murray cod. Subsequently, two further cod were identified as separate species, the eastern freshwater cod (M. ikei) and the Mary River cod (M. mariensis).
Range
The Murray cod is named after the Murray River, part of the Murray-Darling basin in eastern Australia, Australia's largest and most important river system, draining around 14% of the continent. The Murray cod's natural range encompasses virtually the whole Murray-Darling basin, particularly the lowland areas, and extending well into upland areas — to about elevation in the southern half of the basin and to about in the northern half of the basin.
Consequently, Murray cod inhabit a remarkably wide variety of habitats, from cool, clear, fast-flowing streams with riffle-and-pool structure and rocky substrates in upland areas to large, slow flowing, meandering rivers in the extensive alluvial lowland reaches of the Murray-Darling basin.
Murray cod have died out in many of their upland habitats, particularly in the southern Murray-Darling basin, due to a combination of overfishing, siltation, dams and weirs blocking migration, pollution from arsenic-based sheep-dips, mining, and in some cases, introduced trout stockings, which causes competition between juvenile Murray cod and introduced trout species.
Murray cod have also been introduced into other drainage basins, such as the Cooper Basin in Queensland.
Age
Murray cod are very long-lived, which is characteristic of many freshwater native fish in Australia. Longevity is a survival strategy in the variable Australian environment, ensuring that most adults participate in at least one exceptional spawning and recruitment event; these events are often linked to unusually wet La Niña years and may occur only once every one or two decades. Murray cod are the most long-lived freshwater native fish in Australia. The oldest Murray cod yet aged was 48 years old, and the even larger specimens of years past leave little doubt that the species can reach considerably greater ages, of 70 years or more.
Diet
The Murray cod is the apex aquatic predator in the rivers of the Murray-Darling basin, and will eat almost anything smaller than itself, including finned fishes such as smaller Murray cod, golden perch, silver perch, bony bream, eel-tailed catfish, western carp gudgeon, and Australian smelt and introduced fish such as carp, goldfish, and redfin (English perch), as well as crustaceans such as yabbies, freshwater shrimp, and Murray crayfish. Fish are eaten when abundant by mature Murray cod in lowland river and impoundment habitats but crustaceans tend to dominate the diet under natural conditions, and freshwater mussels were commonly eaten in the past. Murray cod have also been known to eat ducks, cormorants, freshwater turtles, water dragons, snakes, mice, and frogs. Observations by recreational fishermen fishing for Murray cod with surface lures at night reveal that the popular description of Murray cod as a demersal ambush predator is only partially correct. While this behaviour is typical during the day, at night, Murray cod are active pelagic predators, venturing into shallow waters and frequently taking prey from the surface.
Reproduction
Murray cod reach sexual maturity between four and six years of age, generally five years. Sexual maturity in Murray cod is dependent on age. Therefore, roughly 70% of wild river Murray cod, with their slower growth rate, have reached sexual maturity by in length. Wild Murray cod in impoundments like Lake Mulwala, with their faster growth rates, do not reach sexual maturity until they are well over in length. These data strongly indicate the 50-cm (20-in) size limit for Murray cod is inadequate and should be increased substantially to allow for a greater chance of reproduction before capture.
Large female Murray cod in the 15– to 35-kg (35– to 80-lb) range are the most important breeders because they produce the most eggs and for other reasons; large females in most fish species are also important because they produce larger larvae with larger yolk sacs, and are also more experienced breeders that display optimal breeding behaviours. Such large females may also have valuable, successful genes to pass on. All of these factors mean the spawnings of large female fish have far higher larval survival rates and make far greater reproductive contributions than the spawnings of small female fish. Not surprisingly, there is no truth to the claim made by some recreational fishers that "large Murray cod don't breed".
Female Murray cod, upon first reaching sexual maturity, have egg counts of no more than 10,000. Very large female Murray cod can have egg counts as high as 80,000–90,000, although a recent, very large 33-kg specimen yielded an egg count of 110,000 viable eggs. Egg counts in female Murray cod of all sizes are relatively low compared to many fish species.
Murray cod spawn in spring, cued by rising water temperatures and increasing photoperiod (daylight length). Initially, fish biologists working with Murray cod considered spring floods and temperatures of to be necessary and that spring flooding is critical for successful recruitment (i.e. survival to juvenile stages) of young cod by providing an influx of pelagic zooplankton and early life-stage macroinvertebrates off the flood plain into the main river channel for first feeding, but more recent research has shown Murray cod breed annually, with or without spring floods, and at temperatures as low as . Additionally, recent research has shown abundant epibenthic/epiphytic (bottom dwelling/edge clinging) prey in unflooded lowland rivers, traits in Murray cod larvae that should allow survival in a variety of challenging conditions, and a significant proportion of Murray cod larvae feeding successfully in unflooded rivers.
The latest research has also shown that Murray cod in fact live their entire lifecycle within the main channel of the stream. Earlier ideas that Murray cod spawn on floodplains, or that the larvae feed on floodplains, are incorrect. Murray cod breed in the main river channel or, in times of spring flood, the inundated upper portion of the main channel and tributary channels, but not on floodplains. Murray cod larvae feed within the main river channel or, in times of spring flood, on the inundated upper portion of the main channel and the channel/floodplain boundary, but not on the floodplain.
Spawning is sometimes preceded by upstream or downstream movements. Radio-tracked Murray cod in the Murray River have moved up to upstream to spawn, before returning to exactly the same snag from where they departed, an unusual homing behaviour in a freshwater fish. Decades of observations by recreational and commercial fishermen suggest such spring spawning movements are common across the Murray cod's geographical range. Spawning is initiated by pairing up and courtship rituals. During the courtship ritual a spawning site is selected and cleaned — hard surfaces such as rocks in upland rivers and impoundments, and logs and occasionally clay banks in lowland rivers, at a depth of , are selected. The female lays the large adhesive eggs as a mat on the spawning surface, which the male fertilises. The female then leaves the spawning site. The male remains to guard the eggs during incubation, which takes six to 10 days (depending on water temperature), and to guard the hatched larvae for a further week or so until they disperse. Larvae disperse from the nest site by drifting in river currents at night, and continue this behaviour around four to seven days. During this dispersal process, larvae simultaneously absorb the remainder of their yolk sac and begin to feed on small, early life-stage macroinvertebrates and epibenthic/epiphytic (bottom dwelling/edge clinging) microinvertebrates. It may be that Murray cod are the first freshwater fish identified as having long-term pair-bonding in its repertoire of mating strategies in the wild.
The relationship between river flows and Murray cod recruitment is more complex than first thought, and in less regulated rivers, Murray cod may be able to recruit under a range of conditions including stable low flows. (Significant recruitment of Murray cod in low-flow conditions in less regulated lowland rivers has now been proven.) This information also suggests that nonriver-regulation-related causes of degradation are playing a larger role in the survival and recruitment of Murray cod larvae than first thought; competition from extremely large numbers of invasive carp larvae is negatively affecting the survival and recruitment of Murray cod larvae to a much greater degree than first thought; and that decades of overfishing is playing a far larger role in the current state of Murray cod stocks, through depletion of spawning adults, than first thought.
These findings do not mean that river regulation and water extraction have not had adverse effects on fish stocks. Rather, river regulation has been a major factor in the decline of Murray cod and other native fish. Thermal pollution is also a major problem. Evidence indicates that strong Murray cod recruitment events (which may be important for sustaining Murray cod populations over the long term) can result from spring flooding, and the health of Australian lowland river ecosystems generally relies on periodic spring flooding. Also, due to the regulation of most of the rivers in the Murray-Darling River system, mainly for irrigation purposes, only exceptional spring floods manage to "break free". The long-term viability of wild Murray cod, other native fish species and river ecosystems, in the face of this fact, are of great concern.
Conservation
History
Murray cod were originally the most common large native fish in the Murray-Darling basin. Contrary to some fishery department literature, the first serious declines in Murray cod were caused by overfishing. In the latter half of the 1800s and the early 1900s, Murray cod were caught in large numbers by both commercial and recreational fishermen. For example, one commercial fishing operation commenced on the Murray River near Echuca in 1855, targeting Murray cod over hundreds of kilometres of river, and yet within eight years, grave concerns over the sustainability of this operation, and complaints about the near-absence of Murray cod in their heavily fished grounds, were being raised in the main state newspaper, The Argus. Yet fishing effort continued to increase in the region, so in the late 1880s and early 1890s, between 40,000 and 150,000 kg of mostly Murray cod (between 7,500 and 27,000 fish, at an average weight of 5.5 kg) were caught near Echuca. Similarly, in 1883, more than 147,000 kg of Murray cod were sent to Melbourne from just one river town (Moama). By the 1920s Murray cod had been overfished to the point where large-scale commercial fishing operations were no longer feasible. Recreational fishermen took similarly excessive hauls during this era, using rods and reels, handlines, setlines, drum nets, gill nets, and even explosives, with hauls often either wasted or illegally sold. Perhaps this extreme overfishing and its impacts on wild Murray cod stocks are best summarised by a short article in the Register News (a South Australian newspaper) in 1929:
In [the last] 29 years 26,214,502 lbs (nearly 11,703 tons) [11,915,683 kg] of Murray cod has been eaten by the people of Melbourne. The Superintendent of Markets (Mr G. B. Minns) included these figures in a statement he made today pointing out that the supply was declining. In 1918, the peak year, [...] was received at the market, but since 1921, when [...] was sent to Melbourne, supply has decreased. Last year [1928] it was only [...].
Twenty years later, the aquatic ecologist J. O. Langtry criticised the heavy fishing pressure, in the form of both uncontrolled small-scale commercial fishing and rampant illegal fishing, which he found in all reaches of the Murray River he investigated in 1949–1950.
A thorough reading of historical newspaper articles and historical government reports reveals that the history of wild Murray cod between the mid–1800s and the mid–1900s was one of citizen agitation, government inaction, and ongoing stock decline. For decades, riverside residents, commercial fishermen, recreational fishermen, local fisheries inspectors, fish retailers, and others agitated in newspapers and other fora about the declining Murray cod stocks, to be met in turn either with government denials or with various ineffective inquiries into Murray cod stocks and fisheries, and various ineffective control measures. Debate about excessive fishing pressure, number of fishermen, number of nets, net mesh size, bag limits, minimum size limits and take of small cod, closed seasons and the taking of spawning cod full of eggs during spring, and other sundry issues, continued without resolution. Fishing regulations were either not amended, or amended and largely unenforced and completely ignored. Heavy commercial, recreational and illegal fishing pressure continued. The end result was a Murray cod population, initially abundant, continually fished down until, in the early to mid 20th century, a number of other factors such as river regulation (listed below) emerged to drive the species even further into decline. All of these drivers of decline left this iconic Australian fish in a perilous situation. There are now concerns for the long-term survival of wild Murray cod populations.
Status
Since 3 July 2003, the Murray cod has been listed as a vulnerable species under the EPBC Act (Environment Protection and Biodiversity Conservation Act 1999). It is listed as a species of Least Concern on the IUCN Red List of Threatened Species, but under state legislation in both South Australia and Victoria, it is an endangered species.
A study published in Biological Conservation in March 2023 listed 23 species which the authors considered to no longer meet the criteria for listing as threatened species under the EPBC Act, including the Murray cod. The team, led by John Woinarski of Charles Darwin University, looked at all species listed as threatened under the act in 2000 and 2022. The Murray cod was the only fish on the list, and the reason for its assessment was given as "Actual recovery over the period 2000–2022, from long period of decline".
Threats
Overfishing
While extremely severe commercial and recreational overfishing in the 1800s and the early 1900s caused the first strong declines of Murray cod, overfishing by recreational fishermen, aided by inadequate fishing regulations, continues today and remains an extremely serious threat to Murray cod. The current size limit of 60 centimetres in most states is inadequate now that scientific studies have documented average size at sexual maturity in Murray cod. This, together with catch data and computer modelling exercises on wild Murray cod stocks, indicates that measures such as raising the size limit to 70 centimetres and reducing the bag and possession limits from 2 and 4 fish respectively to 1 fish are urgently needed to maintain the long-term viability of wild Murray cod populations. In November 2014, the NSW Department of Fisheries introduced a maximum size limit of 75 cm for Murray cod to provide protection for large breeding fish, as well as a new minimum size limit of 55 cm.
Although angler effects are sometimes disregarded in the overall picture today, recent population studies have shown that while all year classes are well represented up to the minimum legal angling size (now 60 centimetres in most states), above that size, numbers of fish are dramatically reduced almost to the point of non-existence in many waters. Some emphasis has been placed on the results of two small surveys which suggested a majority of Murray cod are released by anglers. However, there are valid questions as to the representativeness of these surveys: they do not explain the dramatic disappearance of large numbers of young Murray cod at exactly the minimum size limit, and most importantly, any emphasis on these surveys misses the fundamental point — as a large, long-lived species with relatively low fecundity and delayed sexual maturity, wild Murray cod populations are extremely vulnerable to overfishing, even with only modest angler-kill. A tightening of fishing regulations for wild Murray cod, as referred to above, and a switch by fishermen to a largely catch and release approach for wild Murray cod would alleviate this problem. Recognising these issues, in late 2014 the New South Wales and Victorian fishery departments amended their regulations so that a slot limit of 55 to 75 cm now applies in these states (i.e. only Murray cod between 55 and 75 cm may be taken; those above and below this size range or "slot" must be released). This measure should have positive effects for the Murray cod population by protecting and increasing the proportion of large breeding Murray cod.
Another issue is that Murray cod caught and released in winter, while developing their eggs, or in spring prior to spawning, resorb their eggs and do not spawn. This may be a minor issue compared to some of the other threats facing Murray cod; nevertheless, concerned fishermen try to avoid catching wild Murray cod at these times. A closed season is now in place for the spring spawning period, during which anglers are not allowed to target Murray cod, even on a catch and release basis.
Effects of river regulation
The Murray River and southern tributaries originally displayed a pattern of high flows in winter, high flows and floods in spring, and low flows in summer and autumn. The breeding of Murray cod and other Murray-Darling native fish was adapted to these natural flow patterns. River regulation for irrigation has reversed these natural flow patterns, with negative effects on the breeding and recruitment of Murray cod. The Murray and most southern tributaries now experience high irrigation flows in summer and autumn and low flows in winter and spring. Small and medium floods including the once annual spring flood-pulse have been completely eliminated.
It is estimated that flows at the river mouth by 1995 had declined to only 27% of natural outflows. The probability of the bottom end of the Murray experiencing drought-like flows had increased from 5% under natural conditions to 60% by 1995.
Thermal pollution is the artificial reduction in water temperatures, especially in summer and autumn, caused when frigid water is released from the bottom of reservoirs for irrigation demands. Such temperature suppression typically extends several hundred kilometres downstream. Thermal pollution inhibits both the breeding of Murray cod and the survival of Murray cod larvae, and in extreme cases inhibits even the survival of adult Murray cod.
The rare floods that do break free of the dams and weirs of the Murray-Darling system have their magnitude and duration deliberately curtailed by river regulators. Increasing research indicates this management practice is very harmful and drastically reduces the general ecosystem benefits, and the breeding and recruitment opportunities for Murray cod and other Murray-Darling native fish species, that these now rare floods can provide.
Blackwater events
Blackwater events are emerging as a very serious threat to wild Murray cod stocks in lowland river reaches. Blackwater events occur when floodplains and ephemeral channels accumulate large quantities of leaf litter over a number of years and are then finally inundated in a flood event. The leaf litter releases large quantities of dissolved organic carbon, turning the water a characteristic black colour and inducing a temporary explosion in bacterial numbers and activity, which in turn consumes dissolved oxygen, reducing it to levels harmful or fatal to fish. (Fish essentially asphyxiate.) Water temperature is a critical regulator of blackwater events, as warmer water temperatures increase bacterial activity and markedly reduce the intrinsic oxygen-carrying capacity of water; events that may be tolerable for fish in winter or early spring may be catastrophic in late spring or summer due to the increase in water temperature.
Blackwater events are often described as "natural" events—while there are some historical records of relatively severe events in smaller, more ephemeral systems (e.g. lower Lachlan, upper Darling), there is no record of severe events in the Murray River and its largest southern tributaries before water extraction and river regulation. In the Murray and large southern tributaries, very severe large-scale blackwater events are a relatively new but recurring phenomenon and appear to be an effect of river regulation curtailing the winter/spring flood events that formerly swept leaf litter away annually, exacerbated by long-term declines in rainfall and recurring prolonged drought events.
Flood events in 2010 and 2012 following the prolonged Millennium Drought (1997–2009) induced very severe blackwater events. While formal studies of these events were limited due to the relatively rapid response times required and logistical difficulties, angler photographs and observations of extraordinary numbers of dead Murray cod during these events, and plunging catch rates after them, show they induced extremely heavy Murray cod mortalities along extensive tracts of the Murray River.
Physical barriers to fish movement
Dams, weirs and other instream barriers block the migration of adult and juvenile Murray cod and prevent recolonisation of habitats and maintenance of isolated populations. Additionally, a recent study has shown that approximately 50% of Murray cod larvae are killed when they pass through undershot weirs.
Habitat degradation / siltation
Hundreds of thousands, perhaps more than a million, submerged timber "snags", mainly River Red Gum, have been removed from lowland reaches of the Murray-Darling basin over the past 150 years. The removal of such a vast number of snags has had devastating impacts on Murray cod and river ecosystems. Snags are critical habitats and spawning sites for Murray cod. Snags are also critical for the functioning of lowland river ecosystems — as one of the few hard substrates in lowland river channels, which are composed of fine silts, snags are crucial sites for biofilm growth, macroinvertebrate grazing and general in-stream productivity.
Vegetation clearing and cattle trampling river banks create severe siltation, which fills in pools, degrades river ecosystems and makes rivers and streams uninhabitable for Murray cod. This is exacerbated by removal of riparian (riverbank) vegetation, which causes further siltation and degrades river ecosystems in many ways.
Introduced carp
There is serious competition for food between larval/early juvenile introduced carp and larval/early juvenile native fish. Introduced carp dominate the fish faunas of lowland Murray-Darling rivers; the sheer amount of biomass carp now take up, and the large numbers of larvae carp produce, cause serious negative effects on river ecosystems and native fish. Carp are the main vector of the introduced Lernaea parasite (Lernaea cyprinacea) and serious vectors of the introduced Asian fish tapeworm (Bothriocephalus acheilognathi).
Introduced pathogens
Murray cod have soft skin and very fine scales that leave them particularly vulnerable to infection from exotic diseases and parasites. The following exotic diseases and parasites all seriously affect wild Murray cod; all have been introduced by imports of exotic fish. Chilodonella is a single-celled parasitic protozoan that infects the skin of Murray cod and has caused a number of serious kills of wild Murray cod. Saprolegnia is a fungus-like oomycete or "water mould" that frequently infects Murray cod eggs and the skin of Murray cod that have been roughly handled through poor catch and release technique. (It is essential that Murray cod intended for release only touch cool wet surfaces and are not put down on any hard, dry, rough or hot surfaces, e.g. boat gunwales, boat floors, dry grass, dry rocks, gravel banks, dry towels or mats, etc. Hands should also be wetted before touching them. They must not be hung vertically by the mouth or gill covers.) Wild Murray cod populations across their range suffer extremely severe infestations of Lernaea or "anchor worm", a parasitic copepod that is vectored by introduced carp and burrows into the skin of Murray cod. Lernaea puncture wounds are often secondarily infected by bacteria. Severe Lernaea infestations probably cause the death of many more adult Murray cod than commonly recognised. Ebner reports a young adult Murray cod seemingly killed by severe Lernaea infestation.
Conservation measures
State government fisheries departments support Murray cod populations by stocking with hatchery-bred fish, especially in man-made lakes. Important issues affecting restoration of cod populations, such as the need for spring floods and excessive angler take, are slowly being acknowledged but are yet to be definitively addressed. Other concerns, such as the stocking of Murray cod in areas where trout cod (M. macquariensis) are recovering, which encourages hybridisation, need consideration in future restocking programs.
Relationship with humans
Murray cod play a very important role in the mythology of many Aboriginal tribes in the Murray-Darling basin, and for some tribes, particularly those living along the Murray River, Murray cod were the icon species. The myths of these tribes describe the creation of the Murray River by a gigantic Murray cod fleeing down a small creek to escape from a renowned hunter. In these myths, the fleeing Murray cod enlarges the river and the beating of its tail creates the bends in it. The cod is eventually speared near the terminus of the Murray River, chopped into pieces, and the pieces thrown back into the river. The pieces become all the other fish species of the river. The cod's head is kept intact, told to "keep being Murray cod", and also thrown back into the river.
Murray cod continue to play important roles in the culture of First Nations Peoples along the Murray and Darling rivers and the Murray-Darling Basin overall.
Aquaculture
The Murray cod is renowned as a good-tasting fish for eating. In recent years, with the wild population in decline, farmed fish have increasingly been harvested.
It has long been known that Murray cod could be translocated to impounded water. In the 1850s, landholder Terence Aubrey Murray stocked a large and beautiful billabong—Murray's Lagoon just north of Lake George—with Murray cod fished out of the Molonglo River at Murray's other property of Yarralumla. At some time the billabong overflowed and introduced Murray cod into the slightly brackish lake itself. They bred rapidly, and, from the 1850s to the 1890s, Lake George abounded with them. Due to the lengthy Federation Drought, the lake dried out completely by 1902. In their search for water to survive in, the Murray cod flocked into the mouths of the few small creeks feeding the lake and died there in the thousands.
Farmed Murray cod are fed fishmeal and fish oil, but the species has a better ‘wild fish in to farmed fish out’ ratio than other farmed species such as rainbow trout and Atlantic salmon. It commands a premium price compared to those species. It is increasingly farmed in large dams holding water used for irrigation of farmland, where the presence of effluent produced by the fish is not a problem.
| Biology and health sciences | Acanthomorpha | Animals |
405532 | https://en.wikipedia.org/wiki/W%20and%20Z%20bosons | W and Z bosons | In particle physics, the W and Z bosons are vector bosons that are together known as the weak bosons or more generally as the intermediate vector bosons. These elementary particles mediate the weak interaction; the respective symbols are W+, W−, and Z0. The W bosons have either a positive or negative electric charge of 1 elementary charge and are each other's antiparticles. The Z0 boson is electrically neutral and is its own antiparticle. The three particles each have a spin of 1. The W bosons have a magnetic moment, but the Z0 has none. All three of these particles are very short-lived, with a half-life of about 3×10⁻²⁵ s. Their experimental discovery was pivotal in establishing what is now called the Standard Model of particle physics.
The W bosons are named after the weak force. The physicist Steven Weinberg named the additional particle the "Z particle", and later gave the explanation that it was the last additional particle needed by the model. The W bosons had already been named, and the Z bosons were named for having zero electric charge.
The two W bosons are verified mediators of neutrino absorption and emission. During these processes, the W boson charge induces electron or positron emission or absorption, thus causing nuclear transmutation.
The Z boson mediates the transfer of momentum, spin and energy when neutrinos scatter elastically from matter (a process which conserves charge). Such behavior is almost as common as inelastic neutrino interactions and may be observed in bubble chambers upon irradiation with neutrino beams. The Z boson is not involved in the absorption or emission of electrons or positrons. Whenever an electron is observed as a new free particle, suddenly moving with kinetic energy, it is inferred to be a result of a neutrino interacting with the electron (with the momentum transfer via the Z boson) since this behavior happens more often when the neutrino beam is present. In this process, the neutrino simply strikes the electron (via exchange of a Z boson) and then scatters away from it, transferring some of the neutrino's momentum to the electron.
Basic properties
These bosons are among the heavyweights of the elementary particles. With masses of about 80.4 GeV/c² and 91.2 GeV/c², respectively, the W and Z bosons are roughly 80 to 100 times as massive as the proton – heavier, even, than entire iron atoms.
Their high masses limit the range of the weak interaction. By way of contrast, the photon is the force carrier of the electromagnetic force and has zero mass, consistent with the infinite range of electromagnetism; the hypothetical graviton is also expected to have zero mass. (Although gluons are also presumed to have zero mass, the range of the strong nuclear force is limited for different reasons; see Color confinement.)
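The mass–range connection can be made concrete with the standard uncertainty-principle estimate, range ≈ ħc/(mc²). Below is a minimal Python sketch illustrating this; the ħc value and the ~80.4 GeV W rest energy are standard constants assumed here, not figures taken from this article:

# Estimate the range of the weak interaction from the W boson mass
# via the uncertainty-principle relation: range ~ hbar*c / (m*c^2).
HBAR_C_MEV_FM = 197.327   # hbar*c in MeV*fm (standard value)
M_W_MEV = 80_400.0        # W boson rest energy, ~80.4 GeV (assumed)

range_fm = HBAR_C_MEV_FM / M_W_MEV
print(f"weak-force range ~ {range_fm:.2e} fm")   # ~2.5e-03 fm, i.e. ~2.5e-18 m

This is about a thousandth of a proton diameter, which is why the weak interaction appears point-like at low energies.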
All three bosons have particle spin s = 1. The emission of a W+ or W− boson either lowers or raises the electric charge of the emitting particle by one unit, and also alters the spin by one unit. At the same time, the emission or absorption of a W boson can change the type of the particle – for example changing a strange quark into an up quark. The neutral Z boson cannot change the electric charge of any particle, nor can it change any other of the so-called "charges" (such as strangeness, baryon number, charm, etc.). The emission or absorption of a Z boson can only change the spin, momentum, and energy of the other particle. | Physical sciences | Bosons | null
405548 | https://en.wikipedia.org/wiki/Caecilian | Caecilian | Caecilians are a group of limbless, vermiform (worm-shaped) or serpentine (snake-shaped) amphibians with small or sometimes nonexistent eyes. They mostly live hidden in soil or in streambeds, and this cryptic lifestyle renders caecilians among the least familiar amphibians. Modern caecilians live in the tropics of South and Central America, Africa, and southern Asia. Caecilians feed on small subterranean creatures such as earthworms. The body is cylindrical and often darkly coloured, and the skull is bullet-shaped and strongly built. Caecilian heads have several unique adaptations, including fused cranial and jaw bones, a two-part system of jaw muscles, and a chemosensory tentacle in front of the eye. The skin is slimy and bears ringlike markings or grooves and may contain scales.
Modern caecilians are a clade, the order Gymnophiona (or Apoda), one of the three living amphibian groups alongside Anura (frogs) and Urodela (salamanders). Gymnophiona is a crown group, encompassing all modern caecilians and all descendants of their last common ancestor. There are more than 220 living species of caecilian classified in 10 families. Gymnophionomorpha is a recently coined name for the corresponding total group which includes Gymnophiona as well as a few extinct stem-group caecilians (extinct amphibians whose closest living relatives are caecilians but are not descended from any caecilian). Some palaeontologists have used the name Gymnophiona for the total group and the old name Apoda for the crown group. However, Apoda has other even older uses, including as the name of a genus of moth, making its use potentially confusing and best avoided. 'Gymnophiona' derives from the Greek words γυμνός (gymnos, 'naked') and ὄφις (ophis, 'snake'), as the caecilians were originally thought to be related to snakes and to lack scales.
The study of caecilian evolution is complicated by their poor fossil record and specialized anatomy. Genetic evidence and some anatomical details (such as pedicellate teeth) support the idea that frogs, salamanders, and caecilians (collectively known as lissamphibians) are each other's closest relatives. Frogs and salamanders show many similarities to dissorophoids, a group of extinct amphibians in the order Temnospondyli. Caecilians are more controversial; many studies extend dissorophoid ancestry to caecilians. Some studies have instead argued that caecilians descend from extinct lepospondyl or stereospondyl amphibians, contradicting evidence for lissamphibian monophyly (common ancestry). Rare fossils of early gymnophionans such as Eocaecilia and Funcusvermis have helped to test the various conflicting hypotheses for the relationships between caecilians and other living and extinct amphibians.
Description
Caecilians' anatomy is highly adapted for a burrowing lifestyle. In a couple of species belonging to the primitive genus Ichthyophis, vestigial traces of limbs have been found, and in Typhlonectes compressicauda the presence of limb buds has been observed during embryonic development, remnants in an otherwise completely limbless body.
This makes the smaller species resemble worms, while larger species such as Caecilia thompsoni, with lengths up to about 1.5 m, resemble snakes. Their tails are short or absent, and their cloacae are near the ends of their bodies.
Except for one lungless species, Atretochoana eiselti, all caecilians have lungs, but also use their skin or mouths for oxygen absorption. Often, the left lung is much smaller than the right one, an adaptation to body shape that is also found in snakes.
Their trunk muscles are adapted to pushing their way through the ground, with the vertebral column and its musculature acting as a piston inside the outer layer of the body wall musculature, which is closely attached to the skin. By contracting the outer layer of muscles, the animal squeezes the coelom and generates a strong hydrostatic force that lengthens the body. This muscle system allows the animal to anchor its hind end in position, force the head forwards, and then pull the rest of the body up to reach it in waves. In water or very loose mud, caecilians instead swim in an eel-like fashion. Caecilians in the family Typhlonectidae are aquatic, and the largest of their kind. The representatives of this family have a fleshy fin running along the rear section of their bodies, which enhances propulsion in water.
Skull and senses
Caecilians have small or absent eyes, with only a single known class of photoreceptors, and their vision is limited to dark-light perception. Unlike other modern amphibians (frogs and salamanders) the skull is compact and solid, with few large openings between plate-like cranial bones. The snout is pointed and bullet-shaped, used to force their way through soil or mud. In most species the mouth is recessed under the head, so that the snout overhangs the mouth.
The bones in the skull are reduced in number compared to prehistoric amphibian species. Many bones of the skull are fused together: the maxilla and palatine bones have fused into a maxillopalatine in all living caecilians, and the nasal and premaxilla bones fuse into a nasopremaxilla in some families. Some families can be differentiated by the presence or absence of certain skull bones, such as the septomaxillae, prefrontals, and/or a postfrontal-like bone surrounding the orbit (eye socket). The braincase is encased in a fully integrated compound bone called the os basale, which takes up most of the rear and lower parts of the skull. In skulls viewed from above, a mesethmoid bone may be visible in some species, wedging into the midline of the skull roof.
All caecilians have a pair of unique sensory structures, known as tentacles, located on either side of the head between the eyes and nostrils. These are probably used for a second olfactory capability, in addition to the normal sense of smell based in the nose.
The ringed caecilian (Siphonops annulatus) has dental glands that may be homologous to the venom glands of some snakes and lizards. The function of these glands is unknown.
The middle ear consists of only the stapes bone and the oval window, which transfer vibrations into the inner ear through a reentrant fluid circuit as seen in some reptiles. Adults of species within the family Scolecomorphidae lack both a stapes and an oval window, making them the only known amphibians missing all the components of a middle ear apparatus.
The lower jaw is specialized in caecilians. Gymnophionans, including extinct species, have only two components of the jaw: the pseudodentary (at the front, bearing teeth) and pseudoangular (at the back, bearing the jaw joint and muscle attachments). These two components are what remains following fusion between a larger set of bones. An additional inset tooth row with up to 20 teeth lies parallel to the main marginal tooth row of the jaw.
All but the most primitive caecilians have two sets of muscles for closing the jaw, compared with the single pair found in other amphibians. One set of muscles, the adductors, insert into the upper edge of the pseudoangular in front of the jaw joint. Adductor muscles are commonplace in vertebrates, and close the jaw by pulling upwards and forwards. A more unique set of muscles, the abductors, insert into the rear edge of the pseudoangular below and behind the jaw joint. They close the jaw by pulling backwards and downwards. Jaw muscles are more highly developed in the most efficient burrowers among the caecilians, and appear to help keep the skull and jaw rigid.
Skin
Their skin is smooth and usually dark, but some species have colourful skins. Inside the skin are calcite scales. Because of these scales, the caecilians were once thought to be related to the fossil Stegocephalia, but the scales are now believed to be a secondary development, and the two groups are most likely unrelated. Scales are absent in the families Scolecomorphidae and Typhlonectidae, except the species Typhlonectes compressicauda, where minute scales have been found in the hinder region of the body. The skin also has numerous ring-shaped folds, or annuli, that partially encircle the body, giving them a segmented appearance. Like some other living amphibians, the skin contains glands that secrete a toxin to deter predators. The skin secretions of Siphonops paulensis have been shown to have hemolytic properties.
Milk provisioning
Recent research, as documented in the journal Science, has shed light on the behavior of certain species of caecilians. These studies reveal that some caecilians exhibit a phenomenon wherein they provide their hatchlings with a nutrient-rich substance akin to milk, delivered through a maternal vent. Among the species investigated, the oviparous nonmammalian caecilian amphibian Siphonops annulatus stood out, indicating that the practice of lactation may be more widespread among these creatures than previously thought. As detailed in a 2024 study, researchers collected 16 mothers of the Siphonops annulatus species from cacao plantations in Brazil's Atlantic Forest and filmed them with their altricial hatchlings in the lab. The mothers remained with their offspring, which suckled on a white, viscous liquid from their cloaca, experiencing rapid growth in their first week. This milk-like substance, rich in fats and carbohydrates, is produced in the mother's oviduct epithelium's hypertrophied glands, similar to mammal milk. The substance was released seemingly in response to tactile and acoustic stimulation by the babies. The researchers observed the hatchlings emitting high-pitched clicking sounds as they approached their mothers for milk, a behavior unique among amphibians. This milk-feeding behavior may contribute to the development of the hatchlings' microbiome and immune system, similar to mammalian young. The presence of milk production in caecilians that lay eggs suggests an evolutionary transition between egg-laying and live birth.
Distribution
Caecilians are native to wet, tropical regions of Southeast Asia, India, Bangladesh, Nepal and Sri Lanka, parts of East and West Africa, the Seychelles Islands in the Indian Ocean, Central America, and in northern and eastern South America. In Africa, caecilians are found from Guinea-Bissau (Geotrypetes) to southern Malawi (Scolecomorphus), with an unconfirmed record from eastern Zimbabwe. They have not been recorded from the extensive areas of tropical forest in central Africa. In South America, they extend through subtropical eastern Brazil well into temperate northern Argentina. They can be seen as far south as Buenos Aires, when they are carried by the flood waters of the Paraná River coming from farther north. Their American range extends north to southern Mexico. The northernmost distribution is of the species Ichthyophis sikkimensis of northern India. Ichthyophis is also found in South China and Northern Vietnam. In Southeast Asia, they are found as far east as Java, Borneo, and the southern Philippines, but they have not crossed Wallace's line and are not present in Australia or nearby islands. There are no known caecilians in Madagascar, but their presence in the Seychelles and India has led to speculation on the presence of undiscovered extinct or extant caecilians there.
In 2021, a live specimen of Typhlonectes natans, a caecilian native to Colombia and Venezuela, was collected from a drainage canal in South Florida. It was the only caecilian ever reported in the wild in the United States, and is considered to be an introduction, perhaps from the wildlife trade. Whether a breeding population has been established in the area is unknown.
Taxonomy
The name caecilian derives from the Latin word caecus, meaning "blind", referring to the small or sometimes nonexistent eyes. The name dates back to the taxonomic name of the first species described by Carl Linnaeus, which he named Caecilia tentaculata.
There has historically been disagreement over the use of the two primary scientific names for caecilians, Apoda and Gymnophiona. Some palaeontologists prefer to use the name Apoda to refer to the "crown group", that is, the group containing all modern caecilians and extinct members of these modern lineages, and the name Gymnophiona to refer to the total group, that is, all caecilians and caecilian-like amphibians that are more closely related to modern groups than to frogs or salamanders. However, Apoda has been used for groups of fishes and of sea cucumbers and is the name of a genus of moth, and its continued use in caecilian taxonomy is potentially confusing and unhelpful.
A classification of caecilians by Wilkinson et al. (2011) divided the living caecilians into 9 families containing nearly 200 species. In 2012, a tenth caecilian family was newly described, Chikilidae. This classification is based on a thorough definition of monophyly based on morphological and molecular evidence, and it solves the longstanding problems of paraphyly of the Caeciliidae in previous classifications without an exclusive reliance upon synonymy. There are 219 species of caecilian in 33 genera and 10 families.
The most recent phylogeny of caecilians is based on molecular mitogenomic evidence examined by San Mauro et al. (2014), and modified to include some more recently described genera such as Amazops.
Evolution
Little is known of the evolutionary history of the caecilians, which have left a very sparse fossil record. The first fossil, a vertebra dated to the Paleocene, was not discovered until 1972. Other vertebrae, which have characteristic features unique to modern species, were later found in Paleocene and Late Cretaceous (Cenomanian) sediments.
Phylogenetic evidence suggests that the ancestors of caecilians and batrachians (including frogs and salamanders) diverged from one another during the Carboniferous. This leaves a gap of more than 70 million years between the presumed origins of caecilians and the earliest definitive fossils of stem-caecilians.
Prior to 2023, the earliest fossil attributed to a stem-caecilian (an amphibian closer to caecilians than to frogs or salamanders but not a member of the extant caecilian lineage) came from the Jurassic period. This primitive genus, Eocaecilia, had small limbs and well-developed eyes. In their 2008 description of the Early Permian amphibian Gerobatrachus, Anderson and co-authors suggested that caecilians arose from the Lepospondyl group of ancestral tetrapods, and may be more closely related to amniotes than to frogs and salamanders, which arose from Temnospondyl ancestors. Numerous groups of lepospondyls evolved reduced limbs, elongated bodies, and burrowing behaviors, and morphological studies on Permian and Carboniferous lepospondyls have placed the early caecilian (Eocaecilia) among these groups. Divergent origins of caecilians and other extant amphibians may help explain the slight discrepancy between fossil dates for the origins of modern Amphibia, which suggest Permian origins, and the earlier dates, in the Carboniferous, predicted by some molecular clock studies of DNA sequences. Most morphological and molecular studies of extant amphibians, however, support monophyly for caecilians, frogs, and salamanders, and the most recent molecular study based on multi-locus data suggests a Late Carboniferous–Early Permian origin of extant amphibians.
Chinlestegophis, a stereospondyl temnospondyl from the Late Triassic Chinle Formation of Colorado, was proposed to be a stem-caecilian in a 2017 paper by Pardo and co-authors. If confirmed, this would bolster the proposed pre-Triassic origin of Lissamphibia suggested by molecular clocks. It would fill a gap in the fossil record of early caecilians and suggest that stereospondyls as a whole qualify as stem-group caecilians. However, affinities between Chinlestegophis and gymnophionans have been disputed along several lines of evidence. A 2020 study questioned the choice of characters supporting the relationship, and a 2019 reanalysis of the original data matrix found that other equally parsimonious positions were supported for the placement of Chinlestegophis and gymnophionans among tetrapods. In 2024, Chinlestegophis was consistently recovered as a sister taxon of Rileymillerus within various positions of Stereospondyli outside Lissamphibia based on phylogenetic analyses and revisions.
A 2023 paper by Kligman and co-authors described Funcusvermis, another amphibian from the Chinle Formation of Arizona. Funcusvermis was strongly supported as a stem group caecilian based on traits of its numerous skull and jaw fragments, the largest sample of caecilian fossils known. The paper discussed the various hypotheses for caecilian origins: the polyphyly hypothesis (caecilians as lepospondyls, and other lissamphibians as temnospondyls), the lepospondyl hypothesis (lissamphibians as lepospondyls), and the newer hypothesis supported by Chinlestegophis, where caecilians and other lissamphibians had separate origins within temnospondyls. Nevertheless, all of these ideas were refuted, and the most strongly supported hypothesis combined lissamphibians into a monophyletic group of dissorophoid temnospondyls closely related to Gerobatrachus.
Behavior
Reproduction
Caecilians are the only order of amphibians to use internal insemination exclusively (although most salamanders have internal fertilization and the tailed frog in the US uses a tail-like appendage for internal insemination in its fast-flowing water environment). The male caecilians have a long tube-like intromittent organ, the phallodeum, which is inserted into the cloaca of the female for two to three hours. About 25% of the species are oviparous (egg-laying); the eggs are laid in terrestrial nests rather than in water and are guarded by the female. For some species, the young caecilians are already metamorphosed when they hatch; others hatch as larvae. The larvae are not fully aquatic, but spend the daytime in the soil near the water.
About 75% of caecilians are viviparous, meaning they give birth to already-developed offspring. The foetuses are fed inside the female with cells lining the oviduct, which they eat with special scraping teeth. Some larvae, such as those of Typhlonectes, are born with enormous external gills which are shed almost immediately.
The egg-laying herpelid species Boulengerula taitana feeds its young by developing an outer layer of skin, high in fat and other nutrients, which the young peel off with modified teeth. This allows them to grow by up to 10 times their own weight in a week. The skin is consumed every three days, the time it takes for a new layer to grow, and the young have only been observed to eat it at night. It was formerly thought that the juveniles subsisted only on a liquid secretion from their mothers. This form of parental care, known as maternal dermatophagy, has also been reported in two species in the family Siphonopidae: Siphonops annulatus and Microcaecilia dermatophaga. Siphonopids and herpelids are not closely related to each other, having diverged in the Cretaceous Period. The presence of maternal dermatophagy in both families suggest that it may be more widespread among caecilians than previously considered.
Herpele squalostoma caecilians vertically transmit the mother's microbiome to their offspring through maternal dermatophagy. In comparison to other amphibians, the extended parenting of caecilians can provide beneficial bacteria and fungi, but this transmission risks the spread of diseases like chytridiomycosis.
Diet
Caecilians are considered generalist predators. While caecilians are generally carnivorous, their diet differs between taxa. The stomach contents of wild caecilians primarily include soil ecosystem engineers such as earthworms, termites, lizards, moth larvae, and shrimp. Some species of caecilians will opportunistically consume newborn rodents, salmon eggs, and veal in laboratory conditions, as well as vertebrates such as scolecophidian snakes, lizards, small fish, and frogs.
Cultural significance
As caecilians are a reclusive group, they are only featured in a few human myths, and are generally considered repulsive in traditional customs.
In the folklore of certain regions of India, caecilians are feared and reviled, based on the belief that they are fatally venomous. Caecilians in the Eastern Himalayas are colloquially known as "back ache snakes", while in the Western Ghats, Ichthyophis tricolor is considered to be more toxic than a king cobra. Despite deep cultural respect for the cobra and other dangerous animals, the caecilian is killed on sight with salt and kerosene. These myths have complicated conservation initiatives for Indian caecilians.
Crotaphatrema lamottei, a rare species native to Mount Oku in Cameroon, is classified as a Kefa-ntie (burrowing creature) by the Oku. Kefa-ntie, a term also encompassing native moles and blind snakes, are considered poisonous, causing painful sores if encountered, contacted, or killed. According to Oku tradition, the ceremony to cleanse the affliction involves a potion composed of ground herbs, palm oil, snail shells, and chicken blood applied to and licked off of the left thumb.
South American caecilians have a variable relationship to local cultures. The minhocão, a legendary worm-like beast in Brazilian folklore, may be inspired by caecilians. Colombian folklore states that the aquatic caecilian, Typhlonectes natans, can be manifested from a lock of hair sealed in a sunken bottle. In southern Mexico and Central America, Dermophis mexicanus is colloquially known as the "tapalcua", a name referencing the belief that it emerges to embed itself in the rear end of any unsuspecting person who chooses to relieve themself over its home. This may be inspired by their tendency to nest in refuse heaps.
| Biology and health sciences | Amphibians | null |
405766 | https://en.wikipedia.org/wiki/Linear%20combination%20of%20atomic%20orbitals | Linear combination of atomic orbitals | A linear combination of atomic orbitals or LCAO is a quantum superposition of atomic orbitals and a technique for calculating molecular orbitals in quantum chemistry. In quantum mechanics, electron configurations of atoms are described as wavefunctions. In a mathematical sense, these wave functions are the basis set of functions, the basis functions, which describe the electrons of a given atom. In chemical reactions, orbital wavefunctions are modified, i.e. the electron cloud shape is changed, according to the type of atoms participating in the chemical bond.
It was introduced in 1929 by Sir John Lennard-Jones with the description of bonding in the diatomic molecules of the first main row of the periodic table, but had been used earlier by Linus Pauling for H2+.
Mathematical description
An initial assumption is that the number of molecular orbitals is equal to the number of atomic orbitals included in the linear expansion. In a sense, n atomic orbitals combine to form n molecular orbitals, which can be numbered i = 1 to n and which may not all be the same. The expression (linear expansion) for the i th molecular orbital would be:
ψ_i = c_{1i}χ_1 + c_{2i}χ_2 + c_{3i}χ_3 + ... + c_{ni}χ_n
or
ψ_i = Σ_r c_{ri}χ_r
where ψ_i is a molecular orbital represented as the sum of n atomic orbitals χ_r, each multiplied by a corresponding coefficient c_{ri}, and r (numbered 1 to n) represents which atomic orbital is combined in the term. The coefficients are the weights of the contributions of the n atomic orbitals to the molecular orbital. The Hartree–Fock method is used to obtain the coefficients of the expansion.
The orbitals are thus expressed as linear combinations of basis functions, and the basis functions are single-electron functions which may or may not be centered on the nuclei of the component atoms of the molecule. In either case the basis functions are usually also referred to as atomic orbitals (even though only in the former case this name seems to be adequate). The atomic orbitals used are typically those of hydrogen-like atoms since these are known analytically i.e. Slater-type orbitals but other choices are possible such as the Gaussian functions from standard basis sets or the pseudo-atomic orbitals from plane-wave pseudopotentials.
By minimizing the total energy of the system, an appropriate set of coefficients of the linear combinations is determined. This quantitative approach is now known as the Hartree–Fock method. However, since the development of computational chemistry, the LCAO method often refers not to an actual optimization of the wave function but to a qualitative discussion which is very useful for predicting and rationalizing results obtained via more modern methods. In this case, the shape of the molecular orbitals and their respective energies are deduced approximately from comparing the energies of the atomic orbitals of the individual atoms (or molecular fragments) and applying some recipes known as level repulsion and the like. The graphs that are plotted to make this discussion clearer are called correlation diagrams. The required atomic orbital energies can come from calculations or directly from experiment via Koopmans' theorem.
Molecular orbitals can also be constructed by using the symmetry of the molecules and orbitals involved in bonding, an approach thus sometimes called symmetry adapted linear combination (SALC). The first step in this process is assigning a point group to the molecule. Each operation in the point group is performed upon the molecule. The number of bonds that are unmoved is the character of that operation. This reducible representation is decomposed into the sum of irreducible representations. These irreducible representations correspond to the symmetry of the orbitals involved.
Molecular orbital diagrams provide simple qualitative LCAO treatment. The Hückel method, the extended Hückel method and the Pariser–Parr–Pople method provide some quantitative theories.
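As a concrete illustration of the LCAO expansion and the Hückel method just mentioned, the minimal Python sketch below diagonalises the Hückel matrix of butadiene; the eigenvector columns are the LCAO coefficients c_{ri} and the eigenvalues give the orbital energies E = α + xβ. This is an illustrative sketch, not a production quantum-chemistry code:

import numpy as np

# Hückel LCAO treatment of butadiene: four carbon 2p atomic orbitals,
# with coupling beta between bonded neighbours and alpha on the diagonal.
# Writing H = alpha*I + beta*A, it suffices to diagonalise the adjacency
# matrix A; eigenvalues x give energies E = alpha + x*beta.
n = 4
a = np.zeros((n, n))
for i in range(n - 1):
    a[i, i + 1] = a[i + 1, i] = 1.0

x, c = np.linalg.eigh(a)   # eigenvalues x; columns of c = LCAO coefficients
for k in range(n):
    coeffs = ", ".join(f"{c[j, k]:+.3f}" for j in range(n))
    # beta is negative, so a larger x means a lower orbital energy
    print(f"MO with E = alpha + ({x[k]:+.3f})*beta: coefficients {coeffs}")

The familiar x = ±1.618 and ±0.618 values for butadiene drop out, with the bonding MOs showing the expected in-phase combinations.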
| Physical sciences | Bond structure | Chemistry |
405847 | https://en.wikipedia.org/wiki/AKS%20primality%20test | AKS primality test | The AKS primality test (also known as Agrawal–Kayal–Saxena primality test and cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P". The algorithm was the first that can determine in polynomial time whether a given number is prime or composite, without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on the field of analysis. In 2006 the authors received both the Gödel Prize and Fulkerson Prize for their work.
Importance
AKS is the first primality-proving algorithm to be simultaneously general, polynomial-time, deterministic, and unconditionally correct. Previous algorithms had been developed for centuries and achieved three of these properties at most, but not all four.
The AKS algorithm can be used to verify the primality of any general number given. Many fast primality tests are known that work only for numbers with certain properties. For example, the Lucas–Lehmer test works only for Mersenne numbers, while Pépin's test can be applied to Fermat numbers only.
The maximum running time of the algorithm can be bounded by a polynomial over the number of digits in the target number. ECPP and APR conclusively prove or disprove that a given number is prime, but are not known to have polynomial time bounds for all inputs.
The algorithm is guaranteed to distinguish deterministically whether the target number is prime or composite. Randomized tests, such as Miller–Rabin and Baillie–PSW, can test any given number for primality in polynomial time, but are known to produce only a probabilistic result.
The correctness of AKS is not conditional on any subsidiary unproven hypothesis. In contrast, Miller's version of the Miller–Rabin test is fully deterministic and runs in polynomial time over all inputs, but its correctness depends on the truth of the yet-unproven generalized Riemann hypothesis.
While the algorithm is of immense theoretical importance, it is not used in practice, rendering it a galactic algorithm. For 64-bit inputs, the Baillie–PSW test is deterministic and runs many orders of magnitude faster. For larger inputs, the performance of the (also unconditionally correct) ECPP and APR tests is far superior to AKS. Additionally, ECPP can output a primality certificate that allows independent and rapid verification of the results, which is not possible with the AKS algorithm.
Concepts
The AKS primality test is based upon the following theorem: Given an integer n ≥ 2 and an integer a coprime to n, n is prime if and only if the polynomial congruence relation
(X + a)^n ≡ X^n + a (mod n)    (1)
holds within the polynomial ring (Z/nZ)[X]. Note that X denotes the indeterminate which generates this polynomial ring.
This theorem is a generalization to polynomials of Fermat's little theorem. In one direction it can easily be proven using the binomial theorem together with the following property of the binomial coefficient:
C(n, k) ≡ 0 (mod n) for all 0 < k < n if n is prime.
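For small n, relation (1) can be checked directly by expanding (X + a)^n with binomial coefficients and reducing every coefficient mod n. The following brute-force sketch illustrates this (as the next paragraph notes, this route takes exponential time); the function name is ours, and with a = 1 it prints exactly the primes below 40, since the theorem is an if-and-only-if:

from math import comb

def fermat_poly_check(n: int, a: int = 1) -> bool:
    # Tests (X + a)^n ≡ X^n + a (mod n) by brute-force expansion.
    # The coefficient of X^k in (X + a)^n is C(n, k) * a^(n - k); for the
    # congruence to hold, every middle coefficient (0 < k < n) must vanish
    # mod n, and the constant term a^n must reduce to a.
    if any(comb(n, k) * pow(a, n - k, n) % n for k in range(1, n)):
        return False
    return pow(a, n, n) == a % n

print([m for m in range(2, 40) if fermat_poly_check(m)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]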
While the relation (1) constitutes a primality test in itself, verifying it takes exponential time: the brute force approach would require the expansion of the polynomial (X + a)^n and a reduction (mod n) of the resulting n + 1 coefficients.
The congruence is an equality in the polynomial ring (Z/nZ)[X]. Evaluating in a quotient ring of (Z/nZ)[X] creates an upper bound for the degree of the polynomials involved. The AKS algorithm evaluates the equality in (Z/nZ)[X]/(X^r − 1), making the computational complexity dependent on the size of r. For clarity, this is expressed as the congruence
(X + a)^n ≡ X^n + a (mod X^r − 1, n)    (2)
which is the same as:
(X + a)^n − X^n − a = (X^r − 1)·g + n·f    (3)
for some polynomials f and g.
Note that all primes satisfy this relation (choosing g = 0 in (3) gives (1), which holds for n prime). This congruence can be checked in polynomial time when r is polynomial in the number of digits of n. The AKS algorithm evaluates this congruence for a large set of a values, whose size is polynomial in the number of digits of n. The proof of validity of the AKS algorithm shows that one can find an r and a set of a values with the above properties such that if the congruences hold then n is a power of a prime.
History and running time
In the first version of the above-cited paper, the authors proved the asymptotic time complexity of the algorithm to be Õ(log(n)^12) (using Õ from big O notation)—the twelfth power of the number of digits in n times a factor that is polylogarithmic in the number of digits. However, this upper bound was rather loose; a widely-held conjecture about the distribution of the Sophie Germain primes would, if true, immediately cut the worst case down to Õ(log(n)^6).
In the months following the discovery, new variants appeared (Lenstra 2002, Pomerance 2002, Berrizbeitia 2002, Cheng 2003, Bernstein 2003a/b, Lenstra and Pomerance 2003), which improved the speed of computation greatly. Owing to the existence of the many variants, Crandall and Papadopoulos refer to the "AKS-class" of algorithms in their scientific paper "On the implementation of AKS-class primality tests", published in March 2003.
In response to some of these variants, and to other feedback, the paper "PRIMES is in P" was updated with a new formulation of the AKS algorithm and of its proof of correctness. (This version was eventually published in Annals of Mathematics.) While the basic idea remained the same, r was chosen in a new manner, and the proof of correctness was more coherently organized. The new proof relied almost exclusively on the behavior of cyclotomic polynomials over finite fields. The new upper bound on time complexity was Õ(log(n)^10.5), later reduced using additional results from sieve theory to Õ(log(n)^7.5).
In 2005, Pomerance and Lenstra demonstrated a variant of AKS that runs in Õ(log(n)^6) operations, leading to another updated version of the paper. Agrawal, Kayal and Saxena proposed a variant which would run in Õ(log(n)^3) if Agrawal's conjecture were true; however, a heuristic argument by Pomerance and Lenstra suggested that it is probably false.
The algorithm
The algorithm is as follows:
Input: integer n > 1.
Check if n is a perfect power: if n = a^b for integers a > 1 and b > 1, then output composite.
Find the smallest r such that ord_r(n) > (log2 n)^2. If r and n are not coprime, then output composite.
For all 2 ≤ a ≤ min (r, n−1), check that a does not divide n: If a|n for some 2 ≤ a ≤ min (r, n−1), then output composite.
If n ≤ r, then output prime.
For a = 1 to ⌊√φ(r) · log2(n)⌋ do
if (X + a)^n ≠ X^n + a (mod X^r − 1, n), then output composite;
Output prime.
Here ord_r(n) is the multiplicative order of n modulo r, log2 is the binary logarithm, and φ(r) is Euler's totient function of r.
Step 3 is shown in the paper as checking 1 < gcd(a, n) < n for all a ≤ r. It can be seen this is equivalent to trial division up to r, which can be done very efficiently without using gcd. Similarly, the comparison in step 4 can be replaced by having the trial division return prime once it has checked all values up to and including ⌊√n⌋.
Once beyond very small inputs, step 5 dominates the time taken. The essential reduction in complexity (from exponential to polynomial) is achieved by performing all calculations in the finite ring
(Z/nZ)[X]/(X^r − 1)
consisting of n^r elements. This ring contains only the r monomials X^0, X^1, ..., X^(r−1), and the coefficients are in Z/nZ, which has n elements, all of them codable within O(log n) bits.
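A direct, unoptimised transcription of these steps into Python is sketched below. The helper names (is_perfect_power, poly_pow_mod, etc.) are ours, not from the paper, and the (log2 n)^2 bound is approximated by bit_length()^2, which can only make r slightly larger and does not affect correctness:

from math import gcd, log2

def is_perfect_power(n: int) -> bool:
    # Step 1: is n = a**b for some integers a > 1, b > 1?
    for b in range(2, n.bit_length() + 1):
        lo, hi = 2, 1 << (n.bit_length() // b + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False

def mult_order(n: int, r: int) -> int:
    # ord_r(n): least k >= 1 with n**k congruent to 1 (mod r); 0 if gcd(n, r) > 1.
    if gcd(n, r) != 1:
        return 0
    k, x = 1, n % r
    while x != 1:
        x, k = x * n % r, k + 1
    return k

def poly_mul_mod(p, q, r, n):
    # Multiply coefficient lists in the ring (Z/nZ)[X]/(X**r - 1).
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def poly_pow_mod(base, e, r, n):
    # Square-and-multiply exponentiation in (Z/nZ)[X]/(X**r - 1).
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = poly_mul_mod(result, base, r, n)
        base, e = poly_mul_mod(base, base, r, n), e >> 1
    return result

def aks_is_prime(n: int) -> bool:
    if n < 2:
        return False
    if is_perfect_power(n):                      # step 1
        return False
    bound = n.bit_length() ** 2                  # stand-in for (log2 n)^2
    r = 2
    while mult_order(n, r) <= bound:             # step 2
        r += 1
    for a in range(2, min(r, n - 1) + 1):        # step 3: trial division
        if n % a == 0:
            return False
    if n <= r:                                   # step 4
        return True
    phi = sum(1 for k in range(1, r) if gcd(k, r) == 1)
    for a in range(1, int(phi ** 0.5 * log2(n)) + 1):   # step 5
        lhs = poly_pow_mod([a % n, 1], n, r, n)  # (X + a)^n in the ring
        rhs = [0] * r
        rhs[n % r] = 1                           # X^n reduces to X^(n mod r)
        rhs[0] = (rhs[0] + a) % n
        if lhs != rhs:
            return False
    return True                                  # step 6

print(aks_is_prime(31), aks_is_prime(561))       # True False (561 = 3*11*17)

For n = 31 this sketch settles on r = 29 and checks a = 1 to 26 in step 5; those values come from the sketch's own bounds rather than being quoted from the paper.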
Most later improvements made to the algorithm have concentrated on reducing the size of r, which makes the core operation in step 5 faster, and in reducing the size of s, the number of loops performed in step 5. Typically these changes do not change the computational complexity, but can lead to many orders of magnitude less time taken; for example, Bernstein's final version has a theoretical speedup by a factor of over 2 million.
Proof of validity outline
For the algorithm to be correct, all steps that identify n must be correct. Steps 1, 3, and 4 are trivially correct, since they are based on direct tests of the divisibility of n. Step 5 is also correct: since (2) is true for any choice of a coprime to n and r if n is prime, an inequality means that n must be composite.
The difficult part of the proof is showing that step 6 is true. Its proof of correctness is based on the upper and lower bounds of a multiplicative group over a finite field constructed from the (X + a) binomials that are tested in step 5. Step 4 guarantees that these binomials are distinct elements of that group. For the particular choice of r, the bounds produce a contradiction unless n is prime or a power of a prime. Together with the test of step 1, this implies that n is always prime at step 6.
Example 1: n = 31 is prime
Where PolynomialMod is a term-wise modulo reduction of the polynomial, e.g. PolynomialMod[x + 2x^2 + 3x^3, 3] = x + 2x^2 + 0x^3.
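A term-wise reduction of this kind is one line of code. The sketch below (our naming, with coefficient lists indexed by power, constant term first) reproduces the example just given:

def polynomial_mod(coeffs, m):
    # Term-wise reduction of each coefficient: the PolynomialMod above.
    return [c % m for c in coeffs]

# x + 2x^2 + 3x^3 has coefficient list [0, 1, 2, 3]:
print(polynomial_mod([0, 1, 2, 3], 3))   # -> [0, 1, 2, 0], i.e. x + 2x^2 + 0x^3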
| Mathematics | Prime numbers | null |
405873 | https://en.wikipedia.org/wiki/Bloodstream%20infection | Bloodstream infection | Bloodstream infections (BSIs) are infections of blood caused by blood-borne pathogens. The detection of microbes in the blood (most commonly accomplished by blood cultures) is always abnormal. A bloodstream infection is different from sepsis, which is characterized by severe inflammatory or immune responses of the host organism to pathogens.
Bacteria can enter the bloodstream as a severe complication of infections (like pneumonia or meningitis), during surgery (especially when involving mucous membranes such as the gastrointestinal tract), or due to catheters and other foreign bodies entering the arteries or veins (including during intravenous drug abuse). Transient bacteremia can result after dental procedures or brushing of teeth.
Bacteremia can have several important health consequences. Immune responses to the bacteria can cause sepsis and septic shock; these have high mortality rates, particularly when severe sepsis progresses to septic shock, and especially if not treated quickly (though mild sepsis that is treated early can usually be dealt with successfully). Bacteria can also spread via the blood to other parts of the body (which is called hematogenous spread), causing infections away from the original site of infection, such as endocarditis or osteomyelitis. Treatment for bacteremia is with antibiotics, and prevention with antibiotic prophylaxis can be given in high-risk situations.
Signs and symptoms
Bacteremia is typically transient and is quickly removed from the blood by the immune system.
Bacteremia frequently evokes a response from the immune system called sepsis, which consists of symptoms such as fever, chills, and hypotension. Severe immune responses to bacteremia may result in septic shock and multiple organ dysfunction syndrome, which are potentially fatal.
Types
Based on type of causative microbe, bloodstream infections are of many types:
Causes
Bacteria can enter the bloodstream in a number of different ways. However, for each major classification of bacteria (gram negative, gram positive, or anaerobic) there are characteristic sources or routes of entry into the bloodstream that lead to bacteremia. Causes of bacteremia can additionally be divided into healthcare-associated (acquired during the process of receiving care in a healthcare facility) or community-acquired (acquired outside of a health facility, often prior to hospitalization).
Gram positive bacteremia
Gram positive bacteria are an increasingly important cause of bacteremia. Staphylococcus, streptococcus, and enterococcus species are the most important and most common species of gram-positive bacteria that can enter the bloodstream. These bacteria are normally found on the skin or in the gastrointestinal tract.
Staphylococcus aureus is the most common cause of healthcare-associated bacteremia in North and South America and is also an important cause of community-acquired bacteremia. Skin ulceration or wounds, respiratory tract infections, and IV drug use are the most important causes of community-acquired staph aureus bacteremia. In healthcare settings, intravenous catheters, urinary tract catheters, and surgical procedures are the most common causes of staph aureus bacteremia.
There are many different types of streptococcal species that can cause bacteremia. Group A streptococcus (GAS) typically causes bacteremia from skin and soft tissue infections. Group B streptococcus is an important cause of bacteremia in neonates, often immediately following birth. Viridans streptococci species are normal bacterial flora of the mouth. Viridans strep can cause temporary bacteremia after eating, toothbrushing, or flossing. More severe bacteremia can occur following dental procedures or in patients receiving chemotherapy. Finally, Streptococcus bovis is a common cause of bacteremia in patients with colon cancer.
Enterococci are an important cause of healthcare-associated bacteremia. These bacteria commonly live in the gastrointestinal tract and female genital tract. Intravenous catheters, urinary tract infections and surgical wounds are all risk factors for developing bacteremia from enterococcal species. Resistant enterococcal species can cause bacteremia in patients who have had long hospital stays or frequent antibiotic use in the past (see antibiotic misuse).
Gram negative bacteremia
Gram negative bacterial species are responsible for approximately 24% of all cases of healthcare-associated bacteremia and 45% of all cases of community-acquired bacteremia. In general, gram negative bacteria enter the bloodstream from infections in the respiratory tract, genitourinary tract, gastrointestinal tract, or hepatobiliary system. Gram-negative bacteremia occurs more frequently in elderly populations (65 years or older) and is associated with higher morbidity and mortality in this population. E. coli is the most common cause of community-acquired bacteremia, accounting for approximately 75% of cases. E. coli bacteremia is usually the result of a urinary tract infection. Other organisms that can cause community-acquired bacteremia include Pseudomonas aeruginosa, Klebsiella pneumoniae, and Proteus mirabilis. Although Salmonella infection mainly results only in gastroenteritis in the developed world, it is a common cause of bacteremia in Africa. It principally affects children who lack antibodies to Salmonella and HIV+ patients of all ages.
Among healthcare-associated cases of bacteremia, gram negative organisms are an important cause of bacteremia in the ICU. Catheters in the veins, arteries, or urinary tract can all create a way for gram negative bacteria to enter the bloodstream. Surgical procedures of the genitourinary tract, intestinal tract, or hepatobiliary tract can also lead to gram negative bacteremia. Pseudomonas and Enterobacter species are the most important causes of gram negative bacteremia in the ICU.
Bacteremia risk factors
There are several risk factors that increase the likelihood of developing bacteremia from any type of bacteria. These include:
HIV infection
Diabetes Mellitus
Chronic hemodialysis
Solid organ transplant
Stem cell transplant
Treatment with glucocorticoids
Liver failure
Asplenia
Mechanism
Bacteremia can travel through the blood stream to distant sites in the body and cause infection (hematogenous spread). Hematogenous spread of bacteria is part of the pathophysiology of certain infections of the heart (endocarditis), structures around the brain (meningitis), and tuberculosis of the spine (Pott's disease). Hematogenous spread of bacteria is responsible for many bone infections (osteomyelitis).
Prosthetic cardiac implants (for example artificial heart valves) are especially vulnerable to infection from bacteremia. Prior to widespread use of vaccines, occult bacteremia was an important consideration in febrile children that appeared otherwise well.
Diagnosis
Bacteremia is most commonly diagnosed by blood culture, in which a sample of blood drawn from the vein by needle puncture is allowed to incubate with a medium that promotes bacterial growth. If bacteria are present in the bloodstream at the time the sample is obtained, the bacteria will multiply and can thereby be detected.
Any bacteria that incidentally find their way to the culture medium will also multiply. For example, if the skin is not adequately cleaned before needle puncture, contamination of the blood sample with normal bacteria that live on the surface of the skin can occur. For this reason, blood cultures must be drawn with great attention to sterile process. The presence of certain bacteria in the blood culture, such as Staphylococcus aureus, Streptococcus pneumoniae, and Escherichia coli, almost never represents contamination of the sample. On the other hand, contamination may be more highly suspected if organisms like Staphylococcus epidermidis or Cutibacterium acnes grow in the blood culture.
Two blood cultures drawn from separate sites of the body are often sufficient to diagnose bacteremia. Two out of two cultures growing the same type of bacteria usually represents a real bacteremia, particularly if the organism that grows is not a common contaminant. One out of two positive cultures will usually prompt a repeat set of blood cultures to be drawn to confirm whether a contaminant or a real bacteremia is present. The patient's skin is typically cleaned with an alcohol-based product prior to drawing blood to prevent contamination. Blood cultures may be repeated at intervals to determine if persistent—rather than transient—bacteremia is present.
Prior to drawing blood cultures, a thorough patient history should be taken with particular regard to presence of both fevers and chills, other focal signs of infection such as in the skin or soft tissue, a state of immunosuppression, or any recent invasive procedures.
Ultrasound of the heart is recommended in all those with bacteremia due to Staphylococcus aureus to rule out infectious endocarditis.
Definition
Bacteremia is the presence of bacteria in the bloodstream that are alive and capable of reproducing. It is a type of bloodstream infection. Bacteremia is defined as either a primary or secondary process. In primary bacteremia, bacteria have been directly introduced into the bloodstream. Injection drug use may lead to primary bacteremia. In the hospital setting, use of blood vessel catheters contaminated with bacteria may also lead to primary bacteremia. Secondary bacteremia occurs when bacteria have entered the body at another site, such as cuts in the skin, or the mucous membranes of the lungs (respiratory tract), mouth or intestines (gastrointestinal tract), bladder (urinary tract), or genitals. Bacteria that have infected the body at these sites may then spread into the lymphatic system and gain access to the bloodstream, where further spread can occur.
Bacteremia may also be defined by the timing of bacteria presence in the bloodstream: transient, intermittent, or persistent. In transient bacteremia, bacteria are present in the bloodstream for minutes to a few hours before being cleared from the body, and the result is typically harmless in healthy people. This can occur after manipulation of parts of the body normally colonized by bacteria, such as the mucosal surfaces of the mouth during tooth brushing, flossing, or dental procedures, or instrumentation of the bladder or colon. Intermittent bacteremia is characterized by periodic seeding of the same bacteria into the bloodstream by an existing infection elsewhere in the body, such as an abscess, pneumonia, or bone infection, followed by clearing of that bacteria from the bloodstream. This cycle will often repeat until the existing infection is successfully treated. Persistent bacteremia is characterized by the continuous presence of bacteria in the bloodstream. It is usually the result of an infected heart valve, a central line-associated bloodstream infection (CLABSI), an infected blood clot (suppurative thrombophlebitis), or an infected blood vessel graft. Persistent bacteremia can also occur as part of the infection process of typhoid fever, brucellosis, and bacterial meningitis. Left untreated, conditions causing persistent bacteremia can be potentially fatal.
Bacteremia is clinically distinct from sepsis, which is a condition where the blood stream infection is associated with an inflammatory response from the body, often causing abnormalities in body temperature, heart rate, breathing rate, blood pressure, and white blood cell count.
Treatment
The presence of bacteria in the blood almost always requires treatment with antibiotics. This is because there are high mortality rates from progression to sepsis if antibiotics are delayed. The risk rises as the sepsis worsens: severe sepsis (where organ damage begins), septic shock (where organ damage continues and blood pressure falls to the point that special drugs are needed to keep it high enough), and multiple organ dysfunction syndrome (where organ damage can quickly become fatal, even with supportive devices).
The treatment of bacteremia should begin with empiric antibiotic coverage. Any patient presenting with signs or symptoms of bacteremia or a positive blood culture should be started on intravenous antibiotics. The choice of antibiotic is determined by the most likely source of infection and by the characteristic organisms that typically cause that infection. Other important considerations include the patient's history of antibiotic use, the severity of the presenting symptoms, and any allergies to antibiotics. Empiric antibiotics should be narrowed, preferably to a single antibiotic, once the blood culture returns with a particular bacteria that has been isolated.
Gram positive bacteremia
The Infectious Disease Society of America (IDSA) recommends treating uncomplicated methicillin resistant staph aureus (MRSA) bacteremia with a 14-day course of intravenous vancomycin. Uncomplicated bacteremia is defined as having positive blood cultures for MRSA, but having no evidence of endocarditis, no implanted prostheses, negative blood cultures after 2–4 days of treatment, and signs of clinical improvement after 72 hours.
The antibiotic treatment of choice for streptococcal and enterococcal infections differs by species. However, it is important to look at the antibiotic resistance pattern for each species from the blood culture to better treat infections caused by resistant organisms.
Gram negative bacteremia
The treatment of gram negative bacteremia is also highly dependent on the causative organism. Empiric antibiotic therapy should be guided by the most likely source of infection and the patient's past exposure to healthcare facilities. In particular, a recent history of exposure to a healthcare setting may necessitate antibiotics with Pseudomonas aeruginosa coverage or broader coverage for resistant organisms. Extended generation cephalosporins such as ceftriaxone or beta lactam/beta lactamase inhibitor antibiotics such as piperacillin-tazobactam are frequently used for the treatment of gram negative bacteremia.
Catheter-associated infections
For healthcare-associated bacteremia due to intravenous catheters, the IDSA has published guidelines for catheter removal. Short term catheters (in place <14 days) should be removed if bacteremia is caused by any gram negative bacteria, staph aureus, enterococci or mycobacteria. Long term catheters (>14 days) should be removed if the patient is developing signs or symptoms of sepsis or endocarditis, or if blood cultures remain positive for more than 72 hours.
| Biology and health sciences | Infectious diseases by site | Health |
405944 | https://en.wikipedia.org/wiki/Time%20complexity | Time complexity | In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a function of the size of the input. Since this function is generally difficult to compute exactly, and the running time for small inputs is usually not consequential, one commonly focuses on the behavior of the complexity when the input size increases—that is, the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to represent the input.
Algorithmic complexities are classified according to the type of function appearing in the big O notation. For example, an algorithm with time complexity O(n) is a linear time algorithm and an algorithm with time complexity O(n^α) for some constant α > 0 is a polynomial time algorithm.
Table of common time complexities
The following table summarizes some classes of commonly encountered time complexities. In the table, poly(x) = x^O(1), i.e., polynomial in x.
Constant time
An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) (the complexity of the algorithm) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time as only one operation has to be performed to locate it. In a similar manner, finding the minimal value in an array sorted in ascending order takes constant time; it is the first element. However, finding the minimal value in an unordered array is not a constant time operation as scanning over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time. If the number of elements is known in advance and does not change, however, such an algorithm can still be said to run in constant time.
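As a minimal illustration in Python (function names are hypothetical, not from the article), the two cases above look like this:

    def min_of_sorted(arr):
        # Constant time: a non-empty ascending array's minimum is its first element.
        return arr[0]

    def min_of_unsorted(arr):
        # Linear time: every element must be examined once.
        smallest = arr[0]
        for x in arr[1:]:
            if x < smallest:
                smallest = x
        return smallest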
Despite the name "constant time", the running time does not have to be independent of the problem size, but an upper bound for the running time has to be independent of the problem size. For example, the task "exchange the values of a and b if necessary so that a ≤ b" is called constant time even though the time may depend on whether or not it is already true that a ≤ b. However, there is some constant t such that the time required is always at most t.
Logarithmic time
An algorithm is said to take logarithmic time when T(n) = O(log n). Since log_a n and log_b n are related by a constant multiplier, and such a multiplier is irrelevant to big O classification, the standard usage for logarithmic-time algorithms is O(log n) regardless of the base of the logarithm appearing in the expression of T.
Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
An O(log n) algorithm is considered highly efficient, as the ratio of the number of operations to the size of the input decreases and tends to zero when n increases. An algorithm that must access all elements of its input cannot take logarithmic time, as the time taken for reading an input of size n is of the order of n.
An example of logarithmic time is given by dictionary search. Consider a dictionary D which contains n entries, sorted in alphabetical order. We suppose that, for 1 ≤ k ≤ n, one may access the k-th entry of the dictionary in a constant time. Let D(k) denote this k-th entry. Under these hypotheses, the test to see if a word w is in the dictionary may be done in logarithmic time: consider D(⌊n/2⌋), where ⌊ ⌋ denotes the floor function. If w = D(⌊n/2⌋), that is to say, the word is exactly in the middle of the dictionary, then we are done. Else, if w < D(⌊n/2⌋), i.e., if the word comes earlier in alphabetical order than the middle word of the whole dictionary, we continue the search in the same way in the left (i.e. earlier) half of the dictionary, and then again repeatedly until the correct word is found. Otherwise, if it comes after the middle word, continue similarly with the right half of the dictionary. This algorithm is similar to the method often used to find an entry in a paper dictionary. As a result, the search space within the dictionary decreases as the algorithm gets closer to the target word.
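A minimal Python sketch of this dictionary search (names are illustrative, not from the article):

    def dictionary_search(word, entries):
        # entries must be sorted alphabetically; O(log n) comparisons.
        lo, hi = 0, len(entries) - 1
        while lo <= hi:
            mid = (lo + hi) // 2      # the floor of (lo + hi) / 2
            if entries[mid] == word:
                return mid            # found the word
            elif word < entries[mid]:
                hi = mid - 1          # continue in the earlier half
            else:
                lo = mid + 1          # continue in the later half
        return -1                     # word is not in the dictionary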
Polylogarithmic time
An algorithm is said to run in polylogarithmic time if its time T(n) is O((log n)^k) for some constant k. Another way to write this is O(log^k n).
For example, matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine, and a graph can be determined to be planar in a fully dynamic way in O(log^3 n) time per insert/delete operation.
Sub-linear time
An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above.
The specific term sublinear time algorithm commonly refers to randomized algorithms that sample a small fraction of their inputs and process them efficiently to approximately infer properties of the entire instance. This type of sublinear time algorithm is closely related to property testing and statistics.
Other settings where algorithms can run in sublinear time include:
Parallel algorithms that have linear or greater total work (allowing them to read the entire input), but sub-linear depth.
Algorithms that have guaranteed assumptions on the input structure. An important example are operations on data structures, e.g. binary search in a sorted array.
Algorithms that search for local structure in the input, for example finding a local minimum in a 1-D array (which can be solved in O(log n) time using a variant of binary search). A closely related notion is that of Local Computation Algorithms (LCA) where the algorithm receives a large input and queries to local information about some valid large output.
Linear time
An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that the running time increases at most linearly with the size of the input. More precisely, this means that there is a constant c such that the running time is at most cn for every input of size n. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list, if the adding time is constant, or, at least, bounded by a constant.
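A sketch of such a summing procedure in Python (illustrative only): one bounded-time addition per element gives O(n) overall.

    def total(items):
        # Each iteration does a constant amount of work.
        s = 0
        for x in items:
            s += x
        return s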
Linear time is the best possible time complexity in situations where the algorithm has to sequentially read its entire input. Therefore, much research has been invested into discovering algorithms exhibiting linear time or, at least, nearly linear time. This research includes both software and hardware methods. There are several hardware technologies which exploit parallelism to provide this. An example is content-addressable memory. This concept of linear time is used in string matching algorithms such as the Boyer–Moore string-search algorithm and Ukkonen's algorithm.
Quasilinear time
An algorithm is said to run in quasilinear time (also referred to as log-linear time) if T(n) = O(n log^k n) for some positive constant k; linearithmic time is the case k = 1. Using soft O notation these algorithms are Õ(n). Quasilinear time algorithms are also O(n^(1+ε)) for every constant ε > 0 and thus run faster than any polynomial time algorithm whose time bound includes a term n^c for any c > 1.
Algorithms which run in quasilinear time include:
In-place merge sort, O(n log^2 n)
Quicksort, O(n log n), in its randomized version, has a running time that is O(n log n) in expectation on the worst-case input. Its non-randomized version has an O(n log n) running time only when considering average case complexity.
Heapsort, O(n log n), merge sort, introsort, binary tree sort, smoothsort, patience sorting, etc. in the worst case
Fast Fourier transforms, O(n log n)
Monge array calculation, O(n log n)
In many cases, the O(n log n) running time is simply the result of performing a Θ(log n) operation n times (for the notation, see Big O notation § Family of Bachmann–Landau notations). For example, binary tree sort creates a binary tree by inserting each element of the n-sized array one by one. Since the insert operation on a self-balancing binary search tree takes O(log n) time, the entire algorithm takes O(n log n) time.
Comparison sorts require at least Ω(n log n) comparisons in the worst case because log(n!) = Θ(n log n), by Stirling's approximation. They also frequently arise from the recurrence relation T(n) = 2T(n/2) + O(n).
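To see how that recurrence arises, here is a plain merge sort sketch in Python (illustrative, not from the article): two recursive calls on halves followed by an O(n) merge.

    def merge_sort(a):
        if len(a) <= 1:
            return a                    # base case: T(1) = O(1)
        mid = len(a) // 2
        left = merge_sort(a[:mid])      # T(n/2)
        right = merge_sort(a[mid:])     # T(n/2)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # O(n) merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]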
Sub-quadratic time
An algorithm is said to be subquadratic time if T(n) = o(n^2).
For example, simple, comparison-based sorting algorithms are quadratic (e.g. insertion sort), but more advanced algorithms can be found that are subquadratic (e.g. shell sort). No general-purpose sorts run in linear time, but the change from quadratic to sub-quadratic is of great practical importance.
Polynomial time
An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, that is, T(n) = O(n^k) for some positive constant k. Problems for which a deterministic polynomial-time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast".
Some examples of polynomial-time algorithms:
The selection sort sorting algorithm on n integers performs A·n^2 operations for some constant A (see the sketch after this list). Thus it runs in time O(n^2) and is a polynomial-time algorithm.
All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
Maximum matchings in graphs can be found in polynomial time. In some contexts, especially in optimization, one differentiates between strongly polynomial time and weakly polynomial time algorithms.
These two concepts are only relevant if the inputs to the algorithms consist of integers.
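A minimal Python sketch of the selection sort mentioned in the first item (illustrative, not the article's own code):

    def selection_sort(a):
        # The two nested loops together perform about A*n^2 operations.
        for i in range(len(a)):
            smallest = i
            for j in range(i + 1, len(a)):   # scan the unsorted suffix
                if a[j] < a[smallest]:
                    smallest = j
            a[i], a[smallest] = a[smallest], a[i]
        return a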
Complexity classes
The concept of polynomial time leads to several complexity classes in computational complexity theory. Some important classes defined using polynomial time are the following.
P: The complexity class of decision problems that can be solved on a deterministic Turing machine in polynomial time
NP: The complexity class of decision problems that can be solved on a non-deterministic Turing machine in polynomial time
ZPP: The complexity class of decision problems that can be solved with zero error on a probabilistic Turing machine in polynomial time
RP: The complexity class of decision problems that can be solved with 1-sided error on a probabilistic Turing machine in polynomial time.
BPP: The complexity class of decision problems that can be solved with 2-sided error on a probabilistic Turing machine in polynomial time
BQP: The complexity class of decision problems that can be solved with 2-sided error on a quantum Turing machine in polynomial time
P is the smallest time-complexity class on a deterministic machine which is robust in terms of machine model changes. (For example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other.) Any given abstract machine will have a complexity class corresponding to the problems which can be solved in polynomial time on that machine.
Superpolynomial time
An algorithm is defined to take superpolynomial time if T(n) is not bounded above by any polynomial. Using little omega notation, it is ω(n^c) time for all constants c, where n is the input parameter, typically the number of bits in the input.
For example, an algorithm that runs for 2^n steps on an input of size n requires superpolynomial time (more specifically, exponential time).
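As an illustrative brute-force sketch in Python (not from the article; names are hypothetical), a subset-sum search that examines all 2^n subsets takes exponential time:

    from itertools import combinations

    def subset_sum(nums, target):
        # Examines every one of the 2^n subsets in the worst case.
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None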
An algorithm that uses exponential resources is clearly superpolynomial, but some algorithms are only very weakly superpolynomial. For example, the Adleman–Pomerance–Rumely primality test runs for n^(O(log log n)) time on n-bit inputs; this grows faster than any polynomial for large enough n, but the input size must become impractically large before it cannot be dominated by a polynomial with small degree.
An algorithm that requires superpolynomial time lies outside the complexity class P. Cobham's thesis posits that these algorithms are impractical, and in many cases they are. Since the P versus NP problem is unresolved, it is unknown whether NP-complete problems require superpolynomial time.
Quasi-polynomial time
Quasi-polynomial time algorithms are algorithms whose running time exhibits quasi-polynomial growth, a type of behavior that may be slower than polynomial time but yet is significantly faster than exponential time. The worst case running time of a quasi-polynomial time algorithm is 2^(O((log n)^c)) for some fixed c > 0. When c = 1 this gives polynomial time, and for c < 1 it gives sub-linear time.
There are some problems for which we know quasi-polynomial time algorithms, but no polynomial time algorithm is known. Such problems arise in approximation algorithms; a famous example is the directed Steiner tree problem, for which there is a quasi-polynomial time approximation algorithm achieving an approximation factor of O(log^3 n) (n being the number of vertices), but showing the existence of such a polynomial time algorithm is an open problem.
Other computational problems with quasi-polynomial time solutions but no known polynomial time solution include the planted clique problem in which the goal is to find a large clique in the union of a clique and a random graph. Although quasi-polynomially solvable, it has been conjectured that the planted clique problem has no polynomial time solution; this planted clique conjecture has been used as a computational hardness assumption to prove the difficulty of several other problems in computational game theory, property testing, and machine learning.
The complexity class QP consists of all problems that have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows: QP = ∪_{c ∈ ℕ} DTIME(2^((log n)^c)).
Relation to NP-complete problems
In complexity theory, the unsolved P versus NP problem asks if all problems in NP have polynomial-time algorithms. All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time. Indeed, it is conjectured for many natural NP-complete problems that they do not have sub-exponential time algorithms. Here "sub-exponential time" is taken to mean the second definition presented below. (On the other hand, many graph problems represented in the natural way by adjacency matrices are solvable in subexponential time simply because the size of the input is the square of the number of vertices.) This conjecture (for the k-SAT problem) is known as the exponential time hypothesis. Since it is conjectured that NP-complete problems do not have quasi-polynomial time algorithms, some inapproximability results in the field of approximation algorithms make the assumption that NP-complete problems do not have quasi-polynomial time algorithms. For example, see the known inapproximability results for the set cover problem.
Sub-exponential time
The term sub-exponential time is used to express that the running time of some algorithm may grow faster than any polynomial but is still significantly smaller than an exponential. In this sense, problems that have sub-exponential time algorithms are somewhat more tractable than those that only have exponential algorithms. The precise definition of "sub-exponential" is not generally agreed upon; however, the two most widely used definitions are given below.
First definition
A problem is said to be sub-exponential time solvable if it can be solved in running times whose logarithms grow smaller than any given polynomial. More precisely, a problem is in sub-exponential time if for every ε > 0 there exists an algorithm which solves the problem in time O(2^(n^ε)). The set of all such problems is the complexity class SUBEXP, which can be defined in terms of DTIME as follows: SUBEXP = ∩_{ε > 0} DTIME(2^(n^ε)).
This notion of sub-exponential is non-uniform in terms of ε in the sense that ε is not part of the input and each ε may have its own algorithm for the problem.
Second definition
Some authors define sub-exponential time as running times in 2^(o(n)). This definition allows larger running times than the first definition of sub-exponential time. An example of such a sub-exponential time algorithm is the best-known classical algorithm for integer factorization, the general number field sieve, which runs in time about 2^(Õ(n^(1/3))), where the length of the input is n. Another example was the graph isomorphism problem, which the best known algorithm from 1982 to 2016 solved in 2^(O(√(n log n))). However, at STOC 2016 a quasi-polynomial time algorithm was presented.
It makes a difference whether the algorithm is allowed to be sub-exponential in the size of the instance, the number of vertices, or the number of edges. In parameterized complexity, this difference is made explicit by considering pairs (L, k) of decision problems and parameters k. SUBEPT is the class of all parameterized problems that run in time sub-exponential in k and polynomial in the input size n: SUBEPT = DTIME(2^(o(k)) · poly(n)).
More precisely, SUBEPT is the class of all parameterized problems (L, k) for which there is a computable function f : ℕ → ℕ with f(k) = o(k) and an algorithm that decides L in time 2^(f(k)) · poly(n).
Exponential time hypothesis
The exponential time hypothesis (ETH) is that 3SAT, the satisfiability problem of Boolean formulas in conjunctive normal form with at most three literals per clause and with n variables, cannot be solved in time 2^(o(n)). More precisely, the hypothesis is that there is some absolute constant c > 0 such that 3SAT cannot be decided in time 2^(cn) by any deterministic Turing machine. With m denoting the number of clauses, ETH is equivalent to the hypothesis that kSAT cannot be solved in time 2^(o(m)) for any integer k ≥ 3. The exponential time hypothesis implies P ≠ NP.
Exponential time
An algorithm is said to be exponential time, if T(n) is upper bounded by 2^(poly(n)), where poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is bounded by O(2^(n^k)) for some constant k. Problems which admit exponential time algorithms on a deterministic Turing machine form the complexity class known as EXP.
Sometimes, exponential time is used to refer to algorithms that have T(n) = 2^(O(n)), where the exponent is at most a linear function of n. This gives rise to the complexity class E.
Factorial time
An algorithm is said to be factorial time if T(n) is upper bounded by the factorial function n!. Factorial time is a subset of exponential time (EXP) because n! ≤ n^n = 2^(n log n) for all n. However, it is not a subset of E.
An example of an algorithm that runs in factorial time is bogosort, a notoriously inefficient sorting algorithm based on trial and error. Bogosort sorts a list of n items by repeatedly shuffling the list until it is found to be sorted. In the average case, each pass through the bogosort algorithm will examine one of the n! orderings of the n items. If the items are distinct, only one such ordering is sorted. Bogosort shares patrimony with the infinite monkey theorem.
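A minimal Python sketch of bogosort as described (illustrative only):

    import random

    def is_sorted(a):
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

    def bogosort(a):
        # Each shuffle has a 1/n! chance of hitting the single sorted
        # ordering when all items are distinct.
        while not is_sorted(a):
            random.shuffle(a)
        return a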
Double exponential time
An algorithm is said to be double exponential time if T(n) is upper bounded by 2^(2^(poly(n))), where poly(n) is some polynomial in n. Such algorithms belong to the complexity class 2-EXPTIME.
Well-known double exponential time algorithms include:
Decision procedures for Presburger arithmetic
Computing a Gröbner basis (in the worst case)
Quantifier elimination on real closed fields takes at least double exponential time, and can be done in this time.
| Mathematics | Complexity theory | null |
406245 | https://en.wikipedia.org/wiki/Antonov%20An-225%20Mriya | Antonov An-225 Mriya | The Antonov An-225 Mriya (; NATO reporting name: Cossack) was a strategic airlift cargo aircraft designed and produced by the Antonov Design Bureau in the Soviet Union.
It was originally developed during the 1980s as an enlarged derivative of the Antonov An-124 airlifter for transporting Buran spacecraft. On 21 December 1988, the An-225 performed its maiden flight; only one aircraft was ever completed, although a second airframe with a slightly different configuration was partially built. After a brief period of use in the Soviet space programme, the aircraft was mothballed during the early 1990s. Towards the turn of the century, it was decided to refurbish the An-225 and reintroduce it for commercial operations, carrying oversized payloads for the operator Antonov Airlines. Multiple announcements were made regarding the potential completion of the second airframe, though its construction largely remained on hold due to a lack of funding. By 2009, it had reportedly been brought up to 60–70% completion.
With a maximum takeoff weight of 640 tonnes, the An-225 held several records, including heaviest aircraft ever built and largest wingspan of any operational aircraft. It was commonly used to transport objects once thought impossible to move by air, such as 130-ton generators, wind turbine blades, and diesel locomotives. Additionally, both Chinese and Russian officials had announced separate plans to adapt the An-225 for use in their respective space programmes. The Mriya routinely attracted a high degree of public interest, attaining a global following due to its size and its uniqueness.
The only completed An-225 was destroyed in the Battle of Antonov Airport in 2022 during the Russian invasion of Ukraine. Ukrainian president Volodymyr Zelenskyy announced plans to complete the second An-225 to replace the destroyed aircraft.
Development
Work on the Antonov An-225 began in 1984 with a request from the Soviet government for a large airlifter as a replacement for the Myasishchev VM-T. The specifics of this request included the ability to carry a maximum payload of , both externally and internally, while operating from any runway of at least . As originally set out, the mission and objectives were broadly identical to that of the United States' Shuttle Carrier Aircraft, having been designed to airlift the Energia rocket's boosters and the Buran-class orbiters for the Soviet space program. Furthermore, a relatively short timetable for the delivery of the completed aircraft meant that development would have to proceed at a rapid pace.
Accordingly, the Antonov Design Bureau produced a derivative of their existing Antonov An-124 Ruslan airlifter. The aircraft was stretched via the addition of fore and aft fuselage barrel sections, while a new, enlarged wing centre section was designed that facilitated the carriage of an additional pair of Progress D-18T turbofan engines, increasing the total from four to six powerplants. A completely new tail was also required to handle the wake turbulence generated by the bulky external loads that would be carried on the aircraft's upper fuselage. Despite the novelty of its scale, the design of the An-225 was largely conventional. The lead designer of the An-225 (and the An-124) was Viktor Tolmachev.
On 21 December 1988, the An-225 performed its maiden flight. It made its first public appearance outside the Soviet Union at the 1989 Paris Air Show, where it was presented carrying a Buran orbiter. One year later, it performed a flying display for the public days at the Farnborough Air Show. While two aircraft had been ordered, only a single An-225 (registration CCCP-82060, later UR-82060) was finished. It could carry ultra-heavy and oversized freight weighing up to internally or on the upper fuselage. Cargo on the upper fuselage could be up to in length.
A second An-225 was partially built during the late 1980s for the Soviet space program; however, work on the airframe was suspended following the collapse of the Soviet Union. By 2000, the need for additional An-225 capacity had become apparent; during September 2006, it was decided that the second An-225 would be completed, a feat that was at one point scheduled to occur around 2008. However, the work was subject to repeated delays. By August 2009, the aircraft had not been completed and work had been abandoned. In May 2011, the Antonov CEO reportedly stated that completing the second An-225, which would have a carrying capacity of 250 tons, would require at least $300 million; upon the provision of sufficient financing, its completion could be achieved in three years. According to different sources, the second aircraft was 60–70% complete by 2016.
The revival of space activities involving the An-225 was repeatedly announced and speculated upon throughout its life. During the early 2000s, studies were conducted into the production of an even larger An-225 derivative, the eight-engined Antonov An-325, which was intended to be used in conjunction with Russia's in-development MAKS space plane. In April 2013, the Russian government announced plans to revive Soviet-era air launch projects that would use a purpose-built modification to the An-225 as a midair launchpad.
In May 2017, Airspace Industry Corporation of China (AICC)'s president, Zhang You-Sheng, informed a BBC reporter that AICC had first contemplated cooperation with Antonov in 2009 and made contact with them two years later. AICC intended to modernize the second unfinished An-225 and develop it into an air launch to orbit platform for commercial satellites at altitudes up to . The aviation media cast doubt on the production restart, speculating that the ongoing Russo-Ukrainian war would prevent delivery of various necessary components that would have been sourced from Russia, though China could possibly manufacture them instead. That project did not move forward, but UkrOboronProm, the parent company of Antonov, continued to seek partners to finish the second airframe.
On 25 March 2020, the first An-225 commenced a series of test flights from Hostomel Airport near Kyiv, after more than a year out of service, for the installation of a domestically designed power management and control system.
Design
The Antonov An-225 was a strategic airlift cargo aircraft that retained many similarities with the preceding An-124 airlifter that it was derived from. It had a longer fuselage and cargo deck due to the addition of fuselage barrel extensions that were fitted both fore and aft of the wings. The wings, which were anhedral, also received root extensions to increase their span. The flight control surfaces were controlled via fly-by-wire and powered by triple-redundant hydraulics. Furthermore, the empennage of the An-225 was a twin tail with an oversized, swept-back horizontal stabilizer, having been redesigned from the single vertical stabilizer of the An-124. The use of a twin tail arrangement was essential to enable the aircraft to carry its bulky external loads that would generate wake turbulence, disturbing the airflow around a conventional tail.
The An-225 was powered by a total of six Progress D-18T turbofan engines, two more than the An-124, the addition of which was facilitated by the redesigned wing root area. An increased-capacity landing gear system with 32 wheels was designed, some of which were steerable; these enabled the airlifter to turn within a runway. Akin to its An-124 predecessor, the An-225 incorporated a nose gear designed to "kneel" so cargo could be more easily loaded and unloaded. Additional measures to ease loading and unloading activities included the four overhead cargo cranes that could move along the whole length of the cargo hold, each of which was capable of lifting up to . To facilitate the attachment of external loads, such as the Buran orbiter, various mounting points were present along the upper surface of the fuselage.
Unlike the An-124, the An-225 was not intended for tactical airlifting and was not designed for short-field operations. Accordingly, the An-225 did not have a rear cargo door or ramp, as are present on the An-124, these features having been eliminated in order to save weight. The cargo hold was in volume; wide, high, and long, longer than the first flight of the Wright Flyer. The cargo hold, which was pressurized and furnished with extensive soundproofing, could contain up to 80 standard-dimension cars, 16 intermodal containers, or up to of general cargo.
The flight deck of the An-225 was at the front of the upper deck, which was accessed via a ladder from the lower deck. This flight deck was largely identical to that of the An-124, save for the presence of additional controls to manage the additional pair of engines. To the rear of the flight deck was an array of compartments which, amongst other things, accommodated the crew stations for the aircraft's two flight engineers, navigator, and communication specialist, along with off-duty rest areas, including beds, which facilitated long range missions to be flown. Even when fully loaded, the An-225 was capable of flying non-stop across great distances, such as between New York and Los Angeles.
As originally constructed, the An-225 had a maximum gross weight of ; however, between 2000 and 2001, the aircraft received numerous modifications at a cost of million, such as the addition of a reinforced floor, which increased the maximum gross weight to . Both the earlier and later takeoff weights established the An-225 as the world's heaviest aircraft, exceeding the weight of the double-deck Airbus A380 airliner. Airbus claims to have improved upon the An-225's maximum landing weight by landing an A380 at during testing.
Operational history
The Antonov An-225 Mriya was originally operated between 1988 and 1991 as the prime method of transporting Buran-class orbiters for the Soviet space program. Its first pilot was Oleksandr Halunenko, who continued flying it until 2004. "Antonov Airlines" was concurrently founded in 1989 after it was set up as a holding company by the Antonov Design Bureau as a heavy airlift shipping corporation. This company was to be based in Kyiv, Ukraine, and operate from London Luton Airport in partnership with Air Foyle HeavyLift. While operations began with a fleet of four An-124-100s and three Antonov An-12s, the need for aircraft larger than the An-124 became apparent by the late 1990s.
By this time, the Soviet Union was no longer in existence and the Buran program had been terminated; consequently, the sole completed An-225 was left unused and without a purpose. As early as 1990, Antonov officials were openly speaking on their ambitions for the aircraft to enter commercial use. Despite this, in 1994, it was decided to put the An-225 into long-term storage. During this time, all six of its engines were removed for use on various An-124s, while the second uncompleted An-225 airframe was also stored. As the 1990s progressed, it became clear that there was sufficient demand for a cargo liner even bigger than the An-124. Accordingly, it was decided that the first An-225 would be restored.
The aircraft was re-engined, received modifications to modernise and better adapt it to heavy cargo transport operations, and placed back in service under the management of Antonov Airlines. It became the workhorse of the Antonov Airlines fleet, transporting objects once thought impossible to move by air, such as 130-ton generators, wind turbine blades, and even diesel locomotives. It also became an asset to international relief organizations for its ability to quickly transport huge quantities of emergency supplies during multiple disaster-relief operations.
Under Antonov Airlines, the An-225 received its type certificate from the Interstate Aviation Committee Aviation Register (IAC AR) on 23 May 2001. The type's first flight in commercial service departed from Stuttgart, Germany, on 3 January 2002, and flew to Thumrait, Oman, with 216,000 prepared meals for American military personnel based in the region. This vast number of ready meals was transported on 375 pallets and weighed 187.5 tons. The An-225 was later contracted by the Canadian and U.S. governments to transport military supplies to the Middle East in support of coalition forces. An example of the cost of shipping cargo by An-225 was over (about ) for flying a chimney duct from Billund, Denmark, to Kazakhstan in 2004.
During 2016, Antonov Airlines ceased cooperation with Air Foyle and partnered with Volga-Dnepr instead. The An-225 had carried a blue and yellow paint scheme since 2009; the colors matched those of the Ukrainian flag and led to the An-225 becoming "Ukraine's winged ambassador to the world," in the words of The New York Times.
When the COVID-19 pandemic impacted the world in early 2020, the An-225 participated in the relief effort by conducting flights to deliver medical supplies from China to other parts of the world.
The aircraft was popular with aviation enthusiasts, who frequently visited airports to view its scheduled arrivals and departures.
Records
On 11 August 2009, the heaviest single cargo item ever sent by air was loaded onto the An-225. At long and wide, its consignment, a generator for a gas power plant in Armenia along with its loading frame, represented a payload of . It also transported a total payload of on a commercial flight.
On 11 September 2001, carrying five main battle tanks at a record load of of cargo, the An-225 flew at an altitude of up to over a closed circuit of at a speed of . During 2017, the hired cost was () per hour.
On 11 June 2010, the An-225 carried the world's longest piece of air cargo, two test wind turbine blades from Tianjin, China, to Skrydstrup, Denmark.
On 27 September 2012, the An-225 hosted the highest altitude art exhibition in the world at above sea level during the AviaSvit-XX1 Aerospace Show at Antonov Airport. The exhibition was part of the Globus Gallery based in Kyiv and consisted of 500 artworks by 120 Ukrainian artists.
In total, the An-225 set 240 world records, a tally unmatched in aviation.
Destruction
The aircraft's last commercial mission was from 2 to 5 February 2022, to collect almost 90 tons of COVID-19 test kits from Tianjin, China, and deliver them to Billund, Denmark, via Bishkek, Kyrgyzstan. From there, it returned on 5 February to its base at Antonov Airport in Hostomel, where it underwent an engine swap. On the advice of NATO it was prepared for evacuation, scheduled for the morning of 24 February, but on that day Russia invaded, with the airfield being one of their first targets. A ban on civilian flights was quickly enacted by Ukrainian authorities. During the ensuing Battle of Antonov Airport, the runway was rendered unusable.
On 24 February, the An-225 was said to be intact. On 27 February, a photo was posted on Twitter of an object tentatively identified as the An-225 on fire in its hangar. A report by the Ukrainian edition of Radio Liberty stated that the airplane was destroyed during the Battle of Antonov Airport, which was repeated by Foreign Minister Dmytro Kuleba and by Ukroboronprom, Antonov's parent organisation. The Antonov company initially refused to confirm or deny the reports, and said it was still investigating them.
Also on 27 February, a press release by Ukroboronprom stated that the An-225 had been destroyed by Russian forces. Several other aircraft were in the same hangar as the An-225 at the time of its destruction, and were also destroyed or damaged during the battle; these include a Hungarian-registered Cessna 152, which was crushed by the An-225's left wingtip after the latter fell on top of it.
Ukroboronprom said that they planned to rebuild the plane at the Russians' expense. The statement said: "The restoration is estimated to take over 3 billion USD and over five years. Our task is to ensure that these costs are covered by the Russian Federation, which has caused intentional damage to Ukraine's aviation and the air cargo sector." The Ukrainian government also said that it would be rebuilt.
Aftermath
On 1 March, a new photograph, taken since the initial conflict, was tentatively identified as the tail of the aircraft protruding from its hangar, suggesting that it remained at least partly intact; however, further evidence showed that the aircraft was inoperable due to the extreme damage it had sustained. On 3 March, a video circulated on social media, showing the aircraft burning inside the hangar alongside several Russian trucks, confirming its likely destruction. Nonetheless, Antonov stated again that until the aircraft was inspected by experts, its official status could not be fully known. On 4 March, footage on Russian state television Channel One showed the first clear ground images of the destroyed aircraft, with much of the front section missing. Following Russia's withdrawal from northern Ukraine, the second unfinished aircraft airframe was reported to be intact, despite Russian artillery strikes on the hangar housing it at the Antonov factory at Sviatoshyn Airfield.
Major Dmytro Antonov, the pilot of the An-225, alleged on 19 March 2022 that Antonov Airlines had known for quite some time that an invasion was imminent, but did nothing to prevent the loss of the aircraft. On his YouTube channel, Antonov accused company management of not doing enough to prevent the destruction of the aircraft, which was in ready-to-fly status, after having been advised by NATO to move it to Leipzig, Germany, in advance. Multiple Antonov staff have denied his allegations.
On 1 April, drone footage of Hostomel Airport showed the destroyed Mriya, with the forward fuselage completely burned and destroyed, but with the wings partly intact. It was later revealed that the right wing had been broken, but was held up only by its engines resting on the ground.
Investigations into rebuilding the An-225 are being undertaken, including the possibilities of cannibalising the second, incomplete An-225, or salvaging the remnants of the first plane to finish the second. However, there are several obstacles to rebuilding. Many of the aircraft's Soviet-made components date from the 1980s and are no longer made. Engineers have quoted a price of US$350–500 million, although there is uncertainty regarding whether it would be commercially viable and worth the cost. However, Andrii Sovenko, a former An-225 pilot and aviation author, said:
On 20 May 2022, Ukrainian president Volodymyr Zelenskyy announced his intentions to complete the second An-225, to replace the destroyed aircraft and as a tribute to all the Ukrainian pilots killed during the war. In November 2022, Antonov confirmed plans to rebuild the aircraft at an estimated cost of $500 million. At the time, the company did not state whether parts from the wrecked aircraft and the incomplete airframe would be combined to create a new flying aircraft or where funding might come from. Four months later, Antonov confirmed that parts had been removed from the wrecked aircraft for future mating to the unfinished fuselage.
In March 2023, the Ukrainian government announced that it detained two of three Antonov officials suspected of preventing the Ukrainian National Guard from setting up defenses at Hostomel Airport in anticipation of an invasion.
In April 2023, Ukrainian prosecutors charged the former head of Antonov, Serhii Bychkov, with "official negligence" for failing to order the aircraft flown to Leipzig, Germany, ahead of the Russian invasion. The Ukraine Security Service (SBU) who investigated the case stated, "according to the investigation, on the eve of the full-scale invasion, the An-225 was in proper technical condition, which allowed it to fly outside Ukraine. Instead, the general director of the company did not give appropriate instruction regarding the evacuation of Mriya abroad. Such criminal actions of the official led to the destruction of the Ukrainian transport plane."
Former operators
Antonov Airlines for the Soviet Buran program; the company (and aircraft) passed to Ukraine after the dissolution of the Soviet Union.
Antonov Airlines for commercial operations from 3 January 2002 until 24 February 2022, when the sole aircraft was destroyed during the Battle of Antonov Airport.
Variants
An-224
Original proposal with a rear cargo door. Not built.
An-225
Variant without the rear cargo door. One built, second aircraft incomplete.
An-225-100
Designation applied to the An-225 after its 2000 modernization. Upgrades included a traffic collision avoidance system, improved communications and navigation equipment, and noise reduction features.
An-325
Proposed enlarged, eight-engined aircraft, specifically designed to launch spacecraft of various purposes into orbit. Initially designed for the MAKS program, the An-325 eventually evolved into a joint cooperation between British Aerospace and the Soviet Ministry of Aviation Industry as part of the Interim HOTOL program. It remains unbuilt.
AKS
Intended to carry the Tupolev OOS air-launch-to-orbit spaceplane; a twin-fuselage design consisting of two An-225 fuselages, with the OOS to be carried under the raised center wing. Multiple engine configurations were proposed, ranging from 18 Progress D-18T turbofans to as many as 40 engines, with placements both above and below the wings. An alternative design for the AKS was to use entirely new fuselages, each with a single tail. The AKS was deemed unfeasible, and no prototypes were ever built.
Specifications
| Technology | Specific aircraft_2 | null |
406260 | https://en.wikipedia.org/wiki/Environmental%20remediation | Environmental remediation | Environmental remediation is the cleanup of hazardous substances: the removal, treatment, and containment of pollution or contaminants from environmental media such as soil, groundwater, and sediment. Remediation may be required by regulations before development of land revitalization projects. Developers who agree to voluntary cleanup may be offered incentives under state or municipal programs like New York State's Brownfield Cleanup Program. If remediation is done by removal, the waste materials are simply transported off-site for disposal at another location. The waste material can also be contained by physical barriers like slurry walls. The use of slurry walls is well-established in the construction industry. The application of (low) pressure grouting, used to mitigate soil liquefaction risks in San Francisco and other earthquake zones, has achieved mixed results in field tests to create barriers; site-specific results depend upon many variable conditions that can greatly affect outcomes.
Remedial action is generally subject to an array of regulatory requirements, and may also be based on assessments of human health and ecological risks where no legislative standards exist, or where standards are advisory.
Remediation standards
In the United States, the most comprehensive set of Preliminary Remediation Goals (PRGs) is from the Environmental Protection Agency (EPA) Regional Screening Levels (RSLs). A set of standards used in Europe exists and is often called the Dutch standards. The European Union (EU) is rapidly moving towards Europe-wide standards, although most of the industrialised nations in Europe have their own standards at present. In Canada, most standards for remediation are set by the provinces individually, but the Canadian Council of Ministers of the Environment provides guidance at a federal level in the form of the Canadian Environmental Quality Guidelines and the Canada-Wide Standard for Petroleum Hydrocarbons in Soil.
Site assessment
Once a site is suspected of being contaminated, there is a need to assess the contamination. Often the assessment begins with preparation of a Phase I Environmental Site Assessment. The historical use of the site and the materials used and produced on site will guide the assessment strategy and the type of sampling and chemical analysis to be done. Often nearby sites owned by the same company, or sites which are nearby and have been reclaimed, levelled, or filled, are also contaminated even where the current land use seems innocuous. For example, a car park may have been levelled by using contaminated waste in the fill. Also important is to consider off-site contamination of nearby sites, often through decades of emissions to soil, groundwater, and air. Ceiling dust, topsoil, surface and groundwater of nearby properties should also be tested, both before and after any remediation. This is a controversial step as:
No one wants to have to pay for the cleanup of the site;
If nearby properties are found to be contaminated it may have to be noted on their property title, potentially affecting the value;
No one wants to pay for the cost of assessment.
Corporations which do voluntary testing of their sites are often protected from having the resulting reports to environmental agencies become public under Freedom of Information Acts; however, a "Freedom of Information" inquiry will often produce other documents that are not protected, or will produce references to the reports.
Funding remediation
In the US there has been a mechanism for taxing polluting industries to form a Superfund to remediate abandoned sites, or to litigate to force corporations to remediate their contaminated sites. Other countries have other mechanisms and commonly sites are rezoned to "higher" uses such as high density housing, to give the land a higher value so that after deducting cleanup costs there is still an incentive for a developer to purchase the land, clean it up, redevelop it and sell it on, often as apartments (home units).
Mapping remediation
There are several tools for mapping these sites and which allow the user to view additional information. One such tool is TOXMAP, a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Superfund and Toxics Release Inventory programs.
Technologies
Remediation technologies are many and varied but can generally be categorized into ex-situ and in-situ methods. Ex-situ methods involve excavation of affected soils and subsequent treatment at the surface as well as extraction of contaminated groundwater and treatment at the surface. In-situ methods seek to treat the contamination without removing the soils or groundwater. Various technologies have been developed for remediation of oil-contaminated soil/sediments.
Traditional remediation approaches consist of soil excavation and disposal to landfill and groundwater "pump and treat". In-situ technologies include but are not limited to: solidification and stabilization, soil vapor extraction, permeable reactive barriers, monitored natural attenuation, bioremediation-phytoremediation, chemical oxidation, steam-enhanced extraction and in situ thermal desorption and have been used extensively in the USA.
Barriers
Contaminants can be removed from a site or controlled in place. One option for control is barrier walls, which can be temporary, to prevent contamination during treatment and removal, or more permanent. Techniques to construct barrier walls include deep soil mixing, jet grouting, low-pressure grouting with cement and chemicals, freezing, and slurry walls. Barrier walls must be constructed of materials that are impermeable and resistant to deterioration from contact with waste for the lifespan of the wall. It was not until the use of newer polymer and chemical grouts in the 1950s and 1960s that federal agencies of the US government recognized the need to establish a minimum project life of 50 years in real-world applications.
The Department of Energy is one US government agency that sponsors research to formulate, test, and determine use applications for innovative polymer grouts used in waste containment barriers. Portland cement was used in the past; however, its cracking and poor performance under wet-dry conditions at arid sites have created a need for improved materials. Sites that need remediation have variable humidity, moisture, and soil conditions. Field implementation remains challenging: different environmental and site conditions require different materials, and the placement technologies are specific to the characteristics of the compounds used, which vary in viscosity, gel time, and density:
"The selection of subsurface barriers for any given site which needs remediation, and the selection of a particular barrier technology must be done, however, by means of the Superfund Process, with special emphasis on the remedial investigation and feasibility study portions. The chemical compatibility of the material with the wastes, leachates and geology with which it is likely to come in contact is of particular importance for barriers constructed from fluids which are supposed to set in-situ. EPA emphasizes this compatibility in its guidance documents, noting that thorough characterization of the waste, leachate, barrier material chemistry, site geochemistry, and compatibility testing of the barrier material with the likely disposal site chemical environment are all required."
These guidelines are for all materials - experimental and traditional.
Thermal desorption
Thermal desorption is a technology for soil remediation. During the process a desorber volatilizes the contaminants (e.g. oil, mercury, or hydrocarbons) to separate them from soil or sludge. The contaminants can then either be collected or destroyed in an off-gas treatment system.
Excavation or dredging
Excavation processes can be as simple as hauling the contaminated soil to a regulated landfill, but can also involve aerating the excavated material in the case of volatile organic compounds (VOCs). Recent advancements in bioaugmentation and biostimulation of the excavated material have also proven to be able to remediate semi-volatile organic compounds (SVOCs) onsite. If the contamination affects a river or bay bottom, then dredging of bay mud or other silty clays containing contaminants (including sewage sludge with harmful microorganisms) may be conducted.
Recently, ex-situ chemical oxidation has also been utilized in the remediation of contaminated soil. In this process the contaminated soil is excavated into large bermed areas, where it is treated using chemical oxidation methods.
Surfactant enhanced aquifer remediation (SEAR)
This is used to remove non-aqueous phase liquids (NAPLs) from aquifers. A surfactant solution is pumped into the contaminated aquifer through injection wells and passes through the contaminated zones toward extraction wells. The surfactant solution containing the contaminants is then captured and pumped out by the extraction wells for further treatment at the surface. The treated water is then discharged to surface water or re-injected into the groundwater.
In geologic formations that allow delivery of hydrocarbon mitigation agents or specialty surfactants, this approach provides a cost-effective and permanent solution for sites where other remedial approaches have previously been unsuccessful. The technology is also successful when used as the initial step in a multi-faceted remedial approach combining SEAR with in situ oxidation, bioremediation enhancement, or soil vapor extraction (SVE).
Pump and treat
Pump and treat involves pumping out contaminated groundwater with the use of a submersible or vacuum pump, and allowing the extracted groundwater to be purified by slowly proceeding through a series of vessels that contain materials designed to adsorb the contaminants from the groundwater. For petroleum-contaminated sites this material is usually activated carbon in granular form. Chemical reagents such as flocculants followed by sand filters may also be used to decrease the contamination of groundwater. Air stripping is a method that can be effective for volatile pollutants such as BTEX compounds found in gasoline.
For most biodegradable materials like BTEX, MTBE and most hydrocarbons, bioreactors can be used to clean the contaminated water to non-detectable levels. With fluidized bed bioreactors it is possible to achieve very low discharge concentrations which will meet or exceed discharge requirements for most pollutants.
Depending on geology and soil type, pump and treat may be a good method to quickly reduce high concentrations of pollutants. It is more difficult to reach sufficiently low concentrations to satisfy remediation standards, due to the equilibrium of absorption/desorption processes in the soil. However, pump and treat is typically not the best form of remediation: treating the groundwater is expensive, and cleaning up a release this way is typically very slow. It is best suited to controlling the hydraulic gradient and keeping a release from spreading further. Better options for in-situ treatment often include air sparge/soil vapor extraction (AS/SVE) or dual phase extraction/multiphase extraction (DPE/MPE). Other methods include trying to increase the dissolved oxygen content of the groundwater to support microbial degradation of the compound (especially petroleum) by direct injection of oxygen into the subsurface, or the direct injection of a slurry that slowly releases oxygen over time (typically magnesium peroxide or calcium oxy-hydroxide).
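To illustrate why pump and treat is slow, the following is a minimal back-of-the-envelope sketch of how long flushing a single pore volume takes; the plume volume, porosity, and pumping rate are hypothetical values chosen for illustration, not figures from any particular site:

```python
# Back-of-envelope sketch: how quickly a pump-and-treat system flushes
# one pore volume of a contaminated aquifer. All values are hypothetical.
plume_volume_m3 = 50_000       # bulk volume of the contaminated zone
porosity = 0.30                # effective porosity (fraction of voids)
pump_rate_m3_day = 100.0       # combined rate of the extraction wells

pore_volume_m3 = plume_volume_m3 * porosity
days_per_pore_volume = pore_volume_m3 / pump_rate_m3_day
print(f"one pore volume every {days_per_pore_volume:.0f} days")  # ~150 days

# Because contaminants desorb slowly from soil, many pore volumes are
# usually required, which is why pump and treat is better suited to
# hydraulic containment than to rapid cleanup.
```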
Solidification and stabilization
Solidification and stabilization work has a reasonably good track record but also a set of serious deficiencies related to the durability of solutions and potential long-term effects. In addition, CO2 emissions due to the use of cement are becoming a major obstacle to its widespread use in solidification/stabilization projects.
Stabilization/solidification (S/S) is a remediation and treatment technology that relies on the reaction between a binder and soil to stop/prevent or reduce the mobility of contaminants.
Stabilization involves the addition of reagents to a contaminated material (e.g. soil or sludge) to produce more chemically stable constituents; and
Solidification involves the addition of reagents to a contaminated material to impart physical/dimensional stability to contain contaminants in a solid product and reduce access by external agents (e.g. air, rainfall).
Conventional S/S is an established remediation technology for contaminated soils and treatment technology for hazardous wastes in many countries in the world. However, the uptake of S/S technologies has been relatively modest, and a number of barriers have been identified including:
the relatively low cost and widespread use of disposal to landfill;
the lack of authoritative technical guidance on S/S;
uncertainty over the durability and rate of contaminant release from S/S-treated material;
experiences of past poor practice in the application of cement stabilization processes used in waste disposal in the 1980s and 1990s (ENDS, 1992); and
residual liability associated with immobilized contaminants remaining on-site, rather than their removal or destruction.
In situ oxidation
New in situ oxidation technologies have become popular for remediation of a wide range of soil and groundwater contaminants. Remediation by chemical oxidation involves the injection of strong oxidants such as hydrogen peroxide, ozone gas, potassium permanganate or persulfates.
Oxygen gas or ambient air can also be injected to promote the growth of aerobic bacteria, which accelerate natural attenuation of organic contaminants. One disadvantage of this approach is that it can decrease the anaerobic destruction of contaminants by natural attenuation at sites where existing conditions favor the anaerobic bacteria that normally live in the soil and prefer a reducing environment. In general, aerobic activity is much faster than anaerobic, and overall destruction rates are typically greater when aerobic activity can be successfully promoted.
The injection of gases into the groundwater may also cause contamination to spread faster than normal depending on the hydrogeology of the site. In these cases, injections downgradient of groundwater flow may provide adequate microbial destruction of contaminants prior to exposure to surface waters or drinking water supply wells.
Migration of metal contaminants must also be considered whenever modifying subsurface oxidation-reduction potential. Certain metals are more soluble in oxidizing environments while others are more mobile in reducing environments.
Soil vapor extraction
Soil vapor extraction (SVE) is an effective remediation technology for soil. "Multi Phase Extraction" (MPE) is also an effective remediation technology when soil and groundwater are to be remediated coincidentally. SVE and MPE utilize different technologies to treat the off-gas volatile organic compounds (VOCs) generated after vacuum removal of air and vapors from the subsurface; these include granular activated carbon (most commonly used historically), thermal and/or catalytic oxidation, and vapor condensation. Generally, carbon is used for low (below 500 ppmV) VOC concentration vapor streams, oxidation is used for moderate (up to 4,000 ppmV) VOC concentration streams, and vapor condensation is used for high (over 4,000 ppmV) VOC concentration vapor streams. Below is a brief summary of each technology.
Granular activated carbon (GAC) is used as a filter for air or water, and is commonly used to filter tap water in household sinks. GAC is a highly porous adsorbent material, produced by heating organic matter, such as coal, wood, or coconut shell, in the absence of air, and then crushing it into granules. Activated carbon is positively charged and therefore able to remove negative ions from the water, such as organic ions, ozone, chlorine, fluorides, and dissolved organic solutes, by adsorption onto the activated carbon. The activated carbon must be replaced periodically as it becomes saturated and loses adsorption efficiency with loading. Activated carbon is not effective in removing heavy metals.
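As a rough illustration of why the carbon must be replaced periodically, the following sketch estimates the time for a GAC bed to saturate; the flow rate, concentration, bed mass, and adsorption capacity are all assumed values, since real capacities depend on the specific carbon and contaminant:

```python
# Illustrative change-out estimate for a granular activated carbon vessel.
# Every number below is an assumption for the sake of the example.
flow_l_min = 200.0          # groundwater flow through the vessel, L/min
conc_mg_l = 2.0             # dissolved contaminant concentration, mg/L
bed_mass_kg = 900.0         # mass of carbon in the vessel
capacity_mg_g = 100.0       # adsorption capacity, mg contaminant per g carbon

mass_loaded_per_day_mg = flow_l_min * 60 * 24 * conc_mg_l
total_capacity_mg = bed_mass_kg * 1000 * capacity_mg_g
days_to_saturation = total_capacity_mg / mass_loaded_per_day_mg
print(f"carbon bed saturates in roughly {days_to_saturation:.0f} days")  # ~156
```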
Thermal oxidation (or incineration) can also be an effective remediation technology. This approach is somewhat controversial because of the risk of dioxins being released into the atmosphere through the exhaust gases or effluent off-gas; controlled, high-temperature incineration with filtering of exhaust gases, however, should not pose any risks. Two different technologies can be employed to oxidize the contaminants of an extracted vapor stream; the selection of thermal or catalytic oxidation depends on the type and concentration (in parts per million by volume) of the constituents in the vapor stream. Thermal oxidation is more useful for higher-concentration (~4,000 ppmV) influent vapor streams, which require less natural gas usage, than catalytic oxidation at ~2,000 ppmV.
Thermal oxidation uses a system that acts as a furnace to maintain a high oxidation temperature.
Catalytic oxidation uses a catalyst on a support to facilitate oxidation at a lower operating temperature than thermal oxidation.
Vapor condensation is the most effective off-gas treatment technology for high (over 4,000 ppmV) VOC concentration vapor streams. The process involves cryogenically cooling the vapor stream to below −40 °C so that the VOCs condense out of the vapor stream into liquid form, which is collected in steel containers. The liquid is referred to as dense non-aqueous phase liquid (DNAPL) when it consists predominantly of solvents, or light non-aqueous phase liquid (LNAPL) when it consists predominantly of petroleum or fuel products. The recovered chemical can then be reused or recycled in a more environmentally sustainable manner than the alternatives described above. This technology is also known as cryogenic cooling and compression (C3 technology).
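The concentration thresholds quoted above amount to a simple selection rule. The sketch below encodes that rule directly from the approximate figures given in this section; the function name is illustrative, not an established tool:

```python
# Off-gas treatment selection by VOC concentration (ppmV), using the
# approximate thresholds stated above. The function name is hypothetical.
def select_offgas_treatment(voc_ppmv: float) -> str:
    if voc_ppmv < 500:
        return "granular activated carbon"
    elif voc_ppmv <= 4000:
        return "thermal or catalytic oxidation"
    else:
        return "vapor condensation (cryogenic cooling and compression)"

print(select_offgas_treatment(250))    # granular activated carbon
print(select_offgas_treatment(2500))   # thermal or catalytic oxidation
print(select_offgas_treatment(6000))   # vapor condensation (...)
```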
Nanoremediation
Using nano-sized reactive agents to degrade or immobilize contaminants is termed nanoremediation. In soil or groundwater nanoremediation, nanoparticles are brought into contact with the contaminant through either in situ injection or a pump-and-treat process. The nanomaterials then degrade organic contaminants through redox reactions or adsorb to and immobilize metals such as lead or arsenic. In commercial settings, this technology has been dominantly applied to groundwater remediation, with research into wastewater treatment. Research is also investigating how nanoparticles may be applied to cleanup of soil and gases.
Nanomaterials are highly reactive because of their high surface area per unit mass, and due to this reactivity nanomaterials may react with target contaminants at a faster rate than would larger particles. Most field applications of nanoremediation have used nano zero-valent iron (nZVI), which may be emulsified or mixed with another metal to enhance dispersion.
Because nanoparticles are highly reactive, they can rapidly clump together or react with soil particles or other material in the environment, limiting their dispersal to target contaminants. Important challenges currently limiting nanoremediation include identifying coatings or other formulations that increase the dispersal of the nanoparticle agents so they better reach target contaminants, while limiting any potential toxicity to bioremediation agents, wildlife, or people.
Bioremediation
Bioremediation is a process that treats a polluted area either by altering environmental conditions to stimulate growth of microorganisms or through natural microorganism activity, resulting in the degradation of the target pollutants. Broad categories of bioremediation include biostimulation, bioaugmentation, and natural recovery (natural attenuation). Bioremediation is either done on the contaminated site (in situ) or after the removal of contaminated soils at another more controlled site (ex situ).
In the past, it has been difficult to adopt bioremediation as an implemented policy solution, as inadequate production of remediating microbes left few options for implementation. Manufacturers of microbes for bioremediation must be approved by the EPA, and the EPA has traditionally been cautious about negative externalities that might arise from the introduction of these species. One concern is that genetic material from these microbes could be passed on to other, harmful bacteria, creating more issues if the pathogens evolved the ability to feed off pollutants.
Entomoremediation
Entomoremediation is a variant of bioremediation in which insects decontaminate soils. Entomoremediation techniques engage microorganisms, collembolans, ants, flies, beetles, and termites. It is dependent on saprophytic insect larvae, resistant to adverse environmental conditions and able to bioaccumulate toxic heavy metal contaminants.
Hermetia illucens (black soldier fly - BSF) is an important entomoremediation participant. H. illucens has been observed to reduce polluted substrate dry weight by 49%. H. illucens larvae have been observed to accumulate cadmium at a concentration of 93% and bioaccumulation factor of 5.6, lead, mercury, zinc with a bioaccumulation factor of 3.6, and arsenic at a concentration of 22%. Black soldier fly larvae (BSFL) have also been used to monitor the degradation and reduction of anthropogenic oil contamination in the environment.
Entomoremediation is considered viable as an accessible low-energy, low-carbon, and highly renewable method for environmental decontamination.
Collapsing air microbubbles
Cleaning of oil-contaminated sediments with self-collapsing air microbubbles has recently been explored as a chemical-free technology. Air microbubbles generated in water without adding any surfactant can be used to clean oil-contaminated sediments. This technology holds promise over the use of chemicals (mainly surfactants) for traditional washing of oil-contaminated sediments.
Community consultation and information
In preparation for any significant remediation there should be extensive community consultation. The proponent should both present information to and seek information from the community. The proponent needs to learn about "sensitive" (future) uses like childcare, schools, hospitals, and playgrounds, as well as about community concerns and interests. Consultation should be open and on a group basis, so that each member of the community is informed about issues they may not have individually thought about. An independent chairperson acceptable to both the proponent and the community should be engaged (at proponent expense if a fee is required). Minutes of meetings, including questions asked and the answers to them, and copies of presentations by the proponent should be available both on the internet and at a local library (even a school library) or community centre.
Incremental health risk
Incremental health risk is the increased risk that a receptor (normally a human being living nearby) will face from (the lack of) a remediation project. The use of incremental health risk is based on carcinogenic and other (e.g., mutagenic, teratogenic) effects and often involves value judgements about the acceptable projected rate of increase in cancer. In some jurisdictions this is 1 in 1,000,000 but in other jurisdictions the acceptable projected rate of increase is 1 in 100,000. A relatively small incremental health risk from a single project is not of much comfort if the area already has a relatively high health risk from other operations like incinerators or other emissions, or if other projects exist at the same time causing a greater cumulative risk or an unacceptably high total risk. An analogy often used by remediators is to compare the risk of the remediation on nearby residents to the risks of death through car accidents or tobacco smoking.
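Small independent incremental risks combine almost additively, which is what makes cumulative risk from several co-located projects easy to estimate. A minimal sketch, using hypothetical per-project values chosen around the thresholds quoted above:

```python
import math

# Combining small independent incremental lifetime risks from several
# projects. The per-project values are hypothetical, chosen around the
# 1-in-1,000,000 and 1-in-100,000 thresholds mentioned above.
project_risks = [1e-6, 2e-6, 1e-5]

# Probability of at least one adverse outcome from independent risks.
cumulative = 1 - math.prod(1 - r for r in project_risks)
print(f"exact cumulative risk:  {cumulative:.3e}")   # ~1.300e-05
print(f"additive approximation: {sum(project_risks):.3e}")
```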
Emissions standards
Standards are set for the levels of dust, noise, odour, emissions to air and groundwater, and discharge to sewers or waterways of all chemicals of concern or chemicals likely to be produced during the remediation by processing of the contaminants. These are compared against natural background levels in the area, against the standards applicable to the zoning of nearby areas, and against standards used in other recent remediations. The fact that an emission emanates from an area zoned industrial does not mean that exceedances of the appropriate residential standards should be permitted in a nearby residential area.
Monitoring for compliance against each standard is critical to ensure that exceedances are detected and reported, both to authorities and to the local community.
Enforcement is necessary to ensure that continued or significant breaches result in fines or even a jail sentence for the polluter.
Penalties must be significant, as otherwise fines are treated as a normal expense of doing business. Compliance must be cheaper than continued breaches.
Transport and emergency safety assessment
Assessment should be made of the risks of operations, transporting contaminated material, disposal of waste which may be contaminated including workers' clothes, and a formal emergency response plan should be developed. Every worker and visitor entering the site should have a safety induction personalised to their involvement with the site.
Impacts of funding remediation
Local communities and government often resist rezoning because of the adverse effects of the remediation and new development on local amenities. The main impacts during remediation are noise, dust, odour, and incremental health risk. These are followed by the noise, dust, and traffic of development, and then by the impact of the increased population on local traffic, schools, playing fields, and other public facilities.
Examples of major remediation projects
Homebush Bay, New South Wales, Australia
Dioxins from Union Carbide's production of the now-banned pesticide 2,4,5-trichlorophenoxyacetic acid and the defoliant Agent Orange polluted Homebush Bay. Remediation was completed in 2010, but fishing will continue to be banned for decades.
Bakar, Croatia
An EU contract for the immobilization of 20,000 m3 of polluted material in Bakar, Croatia, based on solidification/stabilization with ImmoCem, is currently in progress. After three years of intensive research by the Croatian government, the EU funded the immobilization project in Bakar. The area is contaminated with large amounts of TPH, PAH, and metals. For the immobilization, the contractor chose to use the mix-in-plant procedure.
| Technology | Environmental remediation | null |
406430 | https://en.wikipedia.org/wiki/Deposition%20%28geology%29 | Deposition (geology) | Deposition is the geological process in which sediments, soil and rocks are added to a landform or landmass. Wind, ice, water, and gravity transport previously weathered surface material, which, at the loss of enough kinetic energy in the fluid, is deposited, building up layers of sediment.
This occurs when the forces responsible for sediment transportation are no longer sufficient to overcome the forces of gravity and friction, creating a resistance to motion; this is known as the null-point hypothesis. Deposition can also refer to the buildup of sediment from organically derived matter or chemical processes. For example, chalk is made up partly of the microscopic calcium carbonate skeletons of marine plankton, the deposition of which induced chemical processes (diagenesis) to deposit further calcium carbonate. Similarly, the formation of coal begins with the deposition of organic material, mainly from plants, in anaerobic conditions.
Null-point hypothesis
The null-point hypothesis explains how sediment is deposited throughout a shore profile according to its grain size. Under the influence of hydraulic energy, sediment particle size fines in the seaward direction; the null point for a given grain size is the position where fluid forcing equals gravity. The concept can also be explained as "sediment of a particular size may move across the profile to a position where it is in equilibrium with the wave and flows acting on that sediment grain". This sorting mechanism combines the influence of the down-slope gravitational force of the profile and forces due to flow asymmetry; the position where there is zero net transport is known as the null point, first proposed by Cornaglia in 1889.
The first principle underlying the null point theory is the gravitational force: finer sediments remain in the water column for longer durations, allowing transportation outside the surf zone and deposition under calmer conditions. The gravitational effect, or settling velocity, determines the location of deposition for finer sediments, whereas a grain's internal angle of friction determines the deposition of larger grains on a shore profile. The secondary principle behind the creation of seaward sediment fining is known as the hypothesis of asymmetrical thresholds under waves; this describes the interaction between the oscillatory flow of waves and tides flowing over wave ripple bedforms in an asymmetric pattern. "The relatively strong onshore stroke of the wave forms an eddy or vortex on the lee side of the ripple; provided the onshore flow persists, this eddy remains trapped in the lee of the ripple. When the flow reverses, the eddy is thrown upwards off the bottom and a small cloud of suspended sediment generated by the eddy is ejected into the water column above the ripple; the sediment cloud is then moved seaward by the offshore stroke of the wave." Where there is symmetry in ripple shape, the vortex is neutralised; the eddy and its associated sediment cloud develop on both sides of the ripple. This creates a cloudy water column which travels under the tidal influence, as the wave orbital motion is in equilibrium.
The null-point hypothesis has been quantitatively proven in Akaroa Harbour, New Zealand; The Wash, U.K.; and Bohai Bay and the West Huang Sea, Mainland China; and in numerous other studies, including Ippen and Eagleson (1955), Eagleson and Dean (1959, 1961), and Miller and Zeigler (1958, 1964).
Deposition of non-cohesive sediments
Large-grain sediments transported by either bedload or suspended load will come to rest when there is insufficient bed shear stress and fluid turbulence to keep the sediment moving; with the suspended load this can be some distance as the particles need to fall through the water column. This is determined by the grain's downward acting weight force being matched by a combined buoyancy and fluid drag force and can be expressed by:
Downward-acting weight force = upward-acting buoyancy force + upward-acting fluid drag force. For a spherical grain this balance is:
$$\frac{4}{3}\pi R^{3}\rho_{s}\,g \;=\; \frac{4}{3}\pi R^{3}\rho\,g \;+\; \frac{1}{2}C_{d}\,\rho\,\pi R^{2}w_{s}^{2}$$
where:
π is the ratio of a circle's circumference to its diameter,
R is the radius of the spherical grain (in m),
ρs is the mass density of the sediment grain (kg/m3),
ρ is the mass density of the fluid (kg/m3),
g is the gravitational acceleration (m/s2),
Cd is the drag coefficient, and
ws is the particle's settling velocity (in m/s).
In order to calculate the drag coefficient, the grain's Reynolds number must first be determined; this depends on the type of flow around the falling sediment particle: laminar flow, turbulent flow, or a hybrid of both. For small grains settling slowly, viscous effects dominate the flow around the grain and it is appropriate to apply Stokes' law (also known as the frictional force, or drag force) of settling.
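As a concrete example of the settling calculation, the sketch below applies Stokes' law to a fine spherical grain and checks the grain Reynolds number to confirm the laminar-flow assumption; the grain size, densities, and viscosity are illustrative values, not figures from this article:

```python
# Stokes' law settling velocity for a small spherical grain, with a
# check on the grain Reynolds number. All input values are illustrative.
rho_s = 2650.0      # sediment (quartz) density, kg/m^3
rho = 1025.0        # seawater density, kg/m^3
mu = 1.07e-3        # dynamic viscosity of seawater, Pa*s
g = 9.81            # gravitational acceleration, m/s^2
R = 15e-6           # grain radius, m (a 30-micron silt grain)

# Stokes' law: viscous drag balances the grain's submerged weight.
w_s = (2.0 / 9.0) * (rho_s - rho) * g * R**2 / mu

# Stokes' law is only valid for small Reynolds numbers (Re << 1).
Re = rho * w_s * (2 * R) / mu
print(f"settling velocity: {w_s * 1000:.3f} mm/s, Re = {Re:.4f}")
# -> roughly 0.745 mm/s with Re ~ 0.02, well inside the laminar regime
```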
Deposition of cohesive sediments
The cohesion of sediment occurs with the small grain sizes associated with silts and clays, or particles smaller than 4ϕ on the phi scale. If these fine particles remain dispersed in the water column, Stokes' law applies to the settling velocity of the individual grains. However, because seawater is a strong electrolyte bonding agent, flocculation occurs, in which individual particles form electrical bonds adhering them together into flocs: "The face of a clay platelet has a slight negative charge where the edge has a slight positive charge; when two platelets come into close proximity with each other, the face of one particle and the edge of the other are electrostatically attracted." Flocs then have a higher combined mass, which leads to quicker deposition through a higher fall velocity, and to deposition in a more shoreward direction than would occur for the individual fine grains of clay or silt.
The occurrence of null point theory
Akaroa Harbour is located on Banks Peninsula, Canterbury, New Zealand. The formation of this harbour occurred through active erosional processes on an extinct shield volcano, whereby the sea flooded the caldera, creating an inlet 16 km in length, with an average width of 2 km and a depth of −13 m relative to mean sea level at the 9 km point along the transect of the central axis. The predominant storm wave energy has unlimited fetch for the outer harbour from a southerly direction, with a calmer environment within the inner harbour, though localised harbour breezes create surface currents and chop that influence the marine sedimentation processes. Deposits of loess from subsequent glacial periods have infilled volcanic fissures over millennia, leaving volcanic basalt and loess as the main sediment types available for deposition in Akaroa Harbour.
Hart et al. (2009) discovered through bathymetric survey, sieve, and pipette analysis of subtidal sediments that sediment textures were related to three main factors: depth, distance from shoreline, and distance along the central axis of the harbour. This resulted in the fining of sediment textures with increasing depth and towards the central axis of the harbour; classified into grain class sizes, "the plotted transect for the central axis goes from silty sands in the intertidal zone to sandy silts in the inner nearshore, to silts in the outer reaches of the bays to mud at depths of 6 m or more".
Other studies have shown this process of the winnowing of sediment grain size from the effect of hydrodynamic forcing; Wang, Collins and Zhu (1988) qualitatively correlated increasing intensity of fluid forcing with increasing grain size. "This correlation was demonstrated at the low energy clayey tidal flats of Bohai Bay (China), the moderate environment of the Jiangsu coast (China) where the bottom material is silty, and the sandy flats of the high energy coast of The Wash (U.K.)." This research shows conclusive evidence for the null point theory existing on tidal flats with differing hydrodynamic energy levels and also on flats that are both erosional and accretional.
Kirby R. (2002) takes this concept further, explaining that the fines are suspended and reworked offshore, leaving behind lag deposits of mainly bivalve and gastropod shells separated out from the finer substrate beneath; waves and currents then heap these deposits into chenier ridges throughout the tidal zone, which tend to be forced up the foreshore profile and also along the foreshore. Cheniers can be found at any level on the foreshore and predominantly characterise an erosion-dominated regime.
Applications for coastal planning and management
The null point theory has been controversial in its acceptance into mainstream coastal science because it operates in dynamic or unstable equilibrium, and many field and laboratory observations have failed to replicate a null point at each grain size throughout the profile. The interaction of variables and processes over time within the environmental context causes issues: "a large number of variables, the complexity of the processes, and the difficulty in observation, all place serious obstacles in the way of systematisation, therefore in certain narrow fields the basic physical theory may be sound and reliable but the gaps are large".
Geomorphologists, engineers, governments, and planners should be aware of the processes and outcomes involved in the null point hypothesis when performing tasks such as beach nourishment, issuing building consents, or building coastal defence structures. This is because sediment grain size analysis throughout a profile allows inference of the erosion or accretion rates possible if shore dynamics are modified. Planners and managers should also be aware that the coastal environment is dynamic, and contextual science should be evaluated before the implementation of any shore profile modification. Thus, while theoretical studies, laboratory experiments, and numerical and hydraulic modelling seek to answer questions pertaining to littoral drift and sediment deposition, the results should not be viewed in isolation, and a substantial body of purely qualitative observational data should supplement any planning or management decision.
| Physical sciences | Sedimentology | Earth science |
406573 | https://en.wikipedia.org/wiki/Roscosmos | Roscosmos | The State Corporation for Space Activities "Roscosmos" (), commonly known simply as Roscosmos (), is a state corporation of the Russian Federation responsible for space flights, cosmonautics programs, and aerospace research.
Originating from the Soviet space program founded in the 1950s, Roscosmos emerged following the dissolution of the Soviet Union in 1991. It initially began as the Russian Space Agency, which was established on 25 February 1992 and restructured in 1999 and 2004 as the Russian Aviation and Space Agency and the Federal Space Agency (Roscosmos), respectively. In 2015, the Federal Space Agency (Roscosmos) was merged with the United Rocket and Space Corporation, a government corporation, to re-nationalize the space industry of Russia, leading to Roscosmos in its current form.
Roscosmos is headquartered in Moscow, with its main Mission Control Center in the nearby city of Korolyov, and the Yuri Gagarin Cosmonaut Training Center located in Star City in Moscow Oblast. Its launch facilities include Baikonur Cosmodrome in Kazakhstan, the world's first and largest spaceport, and Vostochny Cosmodrome, which is being built in the Russian Far East in Amur Oblast. Its director since July 2022 is Yury Borisov.
As the main successor to the Soviet space program, Roscosmos' legacy includes the world's first satellite, the first human spaceflight, and the first space station (Salyut). Its current activities include the International Space Station, in which it is a major partner. On 22 February 2019, Roscosmos announced the construction of its new headquarters in Moscow, the National Space Centre. Its cosmonaut corps was the first astronaut corps in the world.
History
The Soviet space program did not have central executive agencies. Instead, its organizational architecture was multi-centered; it was the design bureaus and the council of designers that had the most say, not the political leadership. The creation of a central agency after the reorganization of the Soviet Union into the Russian Federation was therefore a new development. The Russian Space Agency was formed on 25 February 1992, by a decree of President Yeltsin. Yuri Koptev, who had previously worked with designing Mars landers at NPO Lavochkin, became the agency's first director.
In the early years, the agency suffered from lack of authority as the powerful design bureaus fought to protect their own spheres of operation and to survive. For example, the decision to keep Mir in operation beyond 1999 was not made by the agency, but by the private shareholder board of the Energia design bureau. Another example is that the decision to develop the new Angara rocket was rather a function of Khrunichev's ability to attract resources than a conscious long-term decision by the agency.
Crisis years
The 1990s saw serious financial problems due to the decreased cash flow, which encouraged the space agency to improvise and seek other ways to keep space programs running. This resulted in the agency's leading role in commercial satellite launches and space tourism. Scientific missions, such as interplanetary probes or astronomy missions during these years played a very small role, and although the agency had connections with the Russian aerospace forces, its budget was not part of Russia's defense budget; nevertheless, the agency managed to operate the Mir space station well past its planned lifespan, contributed to the International Space Station, and continued to fly Soyuz and Progress missions.
In 1994, Roscosmos renewed the lease on its Baikonur cosmodrome with the government of Kazakhstan.
2000: Start of ISS cooperation
On 31 October 2000, a Soyuz spacecraft lifted off from the Baikonur Cosmodrome at 10:53 a.m. Kazakhstan time. On board were Expedition One Commander William M. (Bill) Shepherd of NASA and cosmonauts Sergei Krikalev and Yuri Gidzenko of Roscosmos. The trio arrived at the International Space Station on 2 November, marking the start of an uninterrupted human presence on the orbiting laboratory.
2004–2006: Improved situation
In March 2004, the agency's director Yuri Koptev was replaced by Anatoly Perminov, who had previously served as the first commander of the Space Forces.
The Russian economy boomed throughout 2005 on high prices for exports such as oil and gas, and the outlook for funding in 2006 appeared more favorable. As a result, the Russian Duma approved a budget of 305 billion rubles (about US$11 billion) for the Space Agency from January 2006 until 2015, with overall space expenditures in Russia totalling about 425 billion rubles for the same period. The budget for 2006 was as high as 25 billion rubles (about US$900 million), a 33% increase over the 2005 budget. Under the approved 10-year budget, the Space Agency's funding was to increase by 5–10% per year, providing it with a constant influx of money. In addition to the budget, Roscosmos planned to have over 130 billion rubles flow into its budget by other means, such as industry investments and commercial space launches. Around this time, the US-based Planetary Society entered a partnership with Roscosmos.
New science missions: Koronas Foton (launched in January 2009, lost in April 2010), Spektr R (RadioAstron, launched in July 2011, retired in May 2019), Intergelizond (2011?), Spektr RG (Roentgen Gamma, launched 2019, one of two telescopes operational), Spektr UV (Ultra Violet, planned 2030), Spektr M (planned 2030), Celsta (2018?) and Terion (2018?)
Resumption of Bion missions with Bion-M (2013)
New weather satellites Elektro L (launched in January 2011) and Elektro P (2015)
2006–2012
The federal space budget for the year 2009 was left unchanged despite the global economic crisis, standing at about 82 billion rubles ($2.4 billion). In 2011, the government spent 115 billion rubles ($3.8 bln) in the national space programs.
The proposed core project budget for 2013 was around 128.3 billion rubles; the budget for the whole space program was 169.8 billion rubles ($5.6 billion).
By 2015, the budget could be increased to 199.2 billion rubles.
Priorities of the Russian space program include the new Angara rocket family and development of new communications, navigation and remote Earth sensing spacecraft. The GLONASS global navigation satellite system has for many years been one of the top priorities and has been given its own budget line in the federal space budget. In 2007, GLONASS received 9.9 billion rubles ($360 million), and under the terms of a directive signed by Prime Minister Vladimir Putin in 2008, an additional $2.6 billion will be allocated for its development.
Space station funding issues
Due to its International Space Station involvement, up to 50% of Russia's space budget is spent on the crewed space program. Some observers have pointed out that this has a detrimental effect on other aspects of space exploration, and that the other space powers spend much smaller proportions of their overall budgets on maintaining a human presence in orbit.
Despite the considerably improved budget, attention of legislative and executive authorities, positive media coverage and broad support among the population, the Russian space program continues to face several problems. Wages in the space industry are low; the average age of employees is high (46 years in 2007), and much of the equipment is obsolete. On the positive side, many companies in the sector have been able to profit from contracts and partnerships with foreign companies; several new systems such as new rocket upper stages have been developed in recent years; investments have been made to production lines, and companies have started to pay more attention to educating a new generation of engineers and technicians.
2011 New director
On 29 April 2011, Perminov was replaced with Vladimir Popovkin as the director of Roscosmos. The 65-year-old Perminov was over the legal age for state officials, and had received some criticism after a failed GLONASS launch in December 2010. Popovkin is a former commander of the Russian Space Forces and First Deputy Defense Minister of Russia.
2013–2016: Reorganization of the Russian space sector
As a result of a series of reliability problems, and prompted by the failure of a July 2013 Proton-M launch, a major reorganization of the Russian space industry was undertaken. The United Rocket and Space Corporation was formed as a joint-stock corporation by the government in August 2013 to consolidate the Russian space sector. Deputy Prime Minister Dmitry Rogozin said "the failure-prone space sector is so troubled that it needs state supervision to overcome its problems."
Three days following the Proton M launch failure, the Russian government had announced that "extremely harsh measures" would be taken "and spell the end of the [Russian] space industry as we know it."
Information indicated then that the government intended to reorganize in such a way as to "preserve and enhance the Roscosmos space agency."
More detailed plans released in October 2013 called for a re-nationalization of the "troubled space industry", with sweeping reforms including a new "unified command structure and reducing redundant capabilities, acts that could lead to tens of thousands of layoffs." According to Rogozin, the Russian space sector employs about 250,000 people, while the United States needs only 70,000 to achieve similar results. He said: "Russian space productivity is eight times lower than America's, with companies duplicating one another's work and operating at about 40 percent efficiency."
Under the 2013 plan, Roscosmos was to "act as a federal executive body and contracting authority for programs to be implemented by the industry."
In 2016, the state agency was dissolved and the Roscosmos brand moved to the state corporation, which had been created in 2013 as the United Rocket and Space Corporation, with the specific mission to renationalize the Russian space sector.
2017–2021
In 2018, Russian President Vladimir Putin said it "is necessary to drastically improve the quality and reliability of space and launch vehicles" to preserve Russia's increasingly threatened leadership in space. In November 2018, Alexei Kudrin, head of Russia's financial audit agency, named Roscosmos the public enterprise with "the highest losses", due to "irrational spending" and outright theft and corruption under the leadership of Igor Komarov, who had been replaced in May 2018 by Rogozin.
In 2020, Roscosmos under Rogozin reneged on its participation in the Lunar Gateway, a NASA-led project to place a spaceport in lunar orbit; it had previously signed an agreement with the Americans in September 2017.
In March 2021, Roscosmos signed a memorandum on the cooperative construction of a lunar base, the International Lunar Research Station, with the China National Space Administration.
In April 2021, Roscosmos announced that it will be departing the ISS program after 2024. In its place, it was announced that a new space station (Russian Orbital Service Station) will be constructed starting in 2025.
In June 2021 Rogozin complained that sanctions imposed in the wake of the 2014 Russian annexation of Crimea were hurting Roscosmos.
In September 2021, Roscosmos announced its revenue and net income, losing 25 billion roubles and 1 billion roubles respectively in 2020, due to the reduction of profit from foreign contracts, an increase in show-up pay, stay-at-home days and personnel health expenses due to the COVID-19 pandemic. According to Roscosmos, these losses would also impact the corporation for the next two years. In October, Roscosmos placed the tests of rocket engines in the engineering bureau of chemical automatics in Voronezh on hold for one month to deliver 33 tons of oxygen to local medical centers, as part of aid for the COVID-19 pandemic.
In December 2021, the Government of Russia confirmed its agreement with Roscosmos for the development of next-generation space systems; the document had been provided to officials in July 2020.
2022-present
Since the Russian invasion of Ukraine on 24 February 2022, Roscosmos has launched nine rockets in 2022 and seven in the first half of 2023.
In early March 2022, Roscosmos under Rogozin suspended its participation in the ESA's spaceport at Kourou, French Guiana, in a tit-for-tat move over the sanctions imposed in the wake of the Russian invasion. Rogozin also said he would suspend delivery of the RD-181 engine, which is used for the Northrop Grumman Antares-Cygnus space cargo delivery system.
In late March 2022, the European Space Agency (ESA) suspended cooperation with Roscosmos in the ExoMars rover mission because of the Russian invasion, and British satellite venture OneWeb signed contracts with ISRO and SpaceX to launch its satellites after friction had developed "with Moscow" and Roscosmos, its previous orbit service provider. The friction had developed over Rogozin's command that OneWeb needed to ditch its venture capital investment from the UK government.
On 2 May 2022, Rogozin announced that Roscosmos would terminate its involvement in the ISS with 12 months' notice, as stipulated in the international contract that governs the station. This followed the 3 March 2022 announcement that Roscosmos would cease cooperation on scientific experiments at the space lab, and the 25 March 2022 announcement by Rogozin that "cooperation with Europe is now impossible after sanctions over the Ukraine war."
Rogozin was removed from his job as CEO in July 2022 and replaced with Yury Borisov, who seemed to stabilize the relationship with the ISS partners, especially NASA. One complaint against Rogozin was his provocative statements about terminating the ISS agreement over the war in Ukraine, which he had broadcast as early as April 2022. At one point, NASA had bought 71 return trips on Soyuz for almost $4 billion over six years.
The global space-launch services market was valued at $12.4 billion in 2021 and was forecast to reach $38 billion by decade's end. An American academic wrote that in the wake of the Russian invasion, Roscosmos' share of that market was likely to decline in favour of new entrants such as Japan and India, as well as commercial entrants like SpaceX and Blue Origin.
In June 2023, Roscosmos held a campaign to recruit volunteers for the Uran Battalion, a militia for the Russian invasion of Ukraine.
In October 2023, Borisov announced the need for 150 billion rubles to build the Russian space station in the next three years. At completion in 2032, it will have absorbed 609 billion rubles.
In February 2024, at the 2023 AGM, Borisov announced the loss of 180 billion rubles in export revenues, chiefly engine sales and launch services, because of the Western hostility to the Russian invasion of Ukraine. Roscosmos had lost 90% of its launch service contracts since the advent of the war.
Roscosmos and Russia's space industry are facing significant challenges. The country is on track to conduct its fewest orbital launches since 1961. As of August 15, 2024, only nine launches had occurred, a sharp decline partly attributed to the loss of Western customers following Russia's invasion of Ukraine. Roscosmos has reported financial losses of 180 billion rubles ($2.1 billion) due to canceled contracts. The agency's first deputy director indicated it may not achieve profitability until 2025.
Future plans
From 2024, Roscosmos headquarters will be located in the new National Space Centre in the Moscow district of Fili.
Current programs
ISS involvement
Roscosmos is one of the partners in the International Space Station program. It contributed the core space modules Zarya and Zvezda, which were both launched by Proton rockets and later joined by NASA's Unity module. The Rassvet module, launched aboard Space Shuttle Atlantis on STS-132, is primarily used for cargo storage and as a docking port for visiting spacecraft. The Nauka module is the final planned component of the ISS; its launch was postponed several times from the initially planned date in 2007, but it was attached to the ISS in July 2021.
Roscosmos is responsible for expedition crew launches on Soyuz-TMA spacecraft and resupplies the space station with Progress transporters. After the initial ISS contract with NASA expired, Roscosmos and NASA, with the approval of the US government, entered into a space contract running until 2011, under which Roscosmos sells NASA seats on Soyuz spacecraft for approximately $21 million per person each way (thus $42 million for a round trip to the ISS) and provides Progress transport flights at $50 million per Progress, as outlined in the Exploration Systems Architecture Study. Roscosmos announced that under this arrangement, crewed Soyuz flights would be doubled to four per year and Progress flights doubled to eight per year beginning in 2008.
Roscosmos has provided space tourism for fare-paying passengers to ISS through the Space Adventures company. As of 2009, six space tourists have contracted with Roscosmos and have flown into space, each for an estimated fee of at least $20 million (USD).
Continued international collaboration in ISS missions has been thrown into doubt by the 2022 Russian invasion of Ukraine and related sanctions on Russia, although resupply missions continued in 2022 and 2023.
Scientific programs
Roscosmos operates a number of programs for Earth science, communication, and scientific research on the International Space Station. As of 2024, Roscosmos operates one science satellite (Spektr-RG) and no interplanetary probes. Future projects include the Soyuz successor, the Prospective Piloted Transport System; scientific robotic missions to one of the Mars moons; and the Luna-Glob lunar orbit research satellites.
Luna-Glob Moon orbiters and landers; Luna 25, launched in 2023, crashed onto the Moon.
Venera-D Venus lander, planned for 2029
Fobos-Grunt Mars mission, lost in low Earth orbit in 2011 and crashed back to Earth in 2012
Mars 96 Mars mission, lost in low Earth orbit in 1996
Rockets
Roscosmos uses a family of several launch rockets, the most famous of them being the R-7, commonly known as the Soyuz rocket, which is capable of launching about 7.5 tons into low Earth orbit (LEO). The Proton rocket (or UR-500K) has a lift capacity of over 20 tons to LEO. Smaller rockets include Rokot and other light launchers.
Current rocket development encompasses both a new rocket system, Angara, and enhancements of the Soyuz rocket: Soyuz-2 and Soyuz-2-3. Two modifications of the Soyuz, the Soyuz-2.1a and Soyuz-2.1b, have already been successfully tested, enhancing the launch capacity to 8.5 tons to LEO.
New piloted spacecraft
One of Roscosmos's projects widely covered in the media in 2005 was Kliper, a small lifting-body reusable spacecraft. While Roscosmos had reached out to ESA and JAXA, among others, to share development costs of the project, it stated that it would go forward with the project even without the support of other space agencies. This statement was backed by the approval of its budget for 2006–2015, which included the necessary funding for Kliper. However, the Kliper program was cancelled in July 2006 and replaced by the new Orel project. No craft have yet been launched.
Space systems
"Resurs-P" is a series of Russian commercial Earth observation satellites capable of acquiring high-resolution imagery (resolution up to 1.0 m). The spacecraft is operated by Roscosmos as a replacement of the Resurs-DK No.1 satellite.
Creation of the HEO space system "Arktika" to address hydrological and meteorological problems in the Arctic region and the northern areas of the Earth, with the help of two "Arktika-M" spacecraft; in the future the system may be expanded with an "Arktika-MS" communications satellite and "Arktika-R" radar satellites.
The launch of two "Obzor-R" (Review-R) Earth remote sensing satellites with AESA radar, and four "Obzor-O" (Review-O) spacecraft to image the Earth's surface in visible and infrared light in a broad 80 km swath with a resolution of 10 meters. The first two satellites of these projects were planned for launch in 2015.
Gonets: Civilian low Earth orbit communication satellite system. As of 2016, the system consisted of 13 satellites (12 Gonets-M and 1 Gonets-D1).
Suffa Space Observatory
In 2018, Russia agreed to help build the Suffa observatory in Uzbekistan. The observatory was started in 1991, but stalled after the fall of the USSR.
Gecko mating experiment
On 19 July 2014, Roscosmos launched the Foton-M4 satellite containing, among other animals and plants, a group of five geckos. The five geckos, four females and one male, were used as part of the Gecko-F4 research program aimed at measuring the effects of weightlessness on the lizards' ability to procreate and develop in a harsh environment. Soon after the spacecraft exited the atmosphere, however, mission control lost contact with the vessel; communication was only reestablished later in the mission. When the satellite returned to Earth after its planned two-month mission had been cut short to 44 days, the space agency researchers reported that all the geckos had perished during the flight.
The exact cause that led to the deaths of the geckos was declared unknown by the scientific team in charge of the project. Reports from the Institute of Medical and Biological Problems in Russia have indicated that the lizards had been dead for at least a week prior to their return to Earth. A number of those connected to the mission have theorized that a failure in the vessel's heating system may have caused the cold blooded reptiles to freeze to death.
Included in the mission were a number of fruit flies, plants, and mushrooms which all survived the mission.
Launch control
The Russian Space Forces are the military counterpart of Roscosmos, with mission objectives similar to those of the United States Space Force. The branch was formed after the merging of the space components of the Russian Air Force and the Aerospace Defense Forces (VKO) in 2015. The Space Forces control Russia's Plesetsk Cosmodrome launch facility. Roscosmos and the Space Forces share control of the Baikonur Cosmodrome, where Roscosmos reimburses the VKO for the wages of many of the flight controllers during civilian launches. Roscosmos and the Space Forces also share control of the Yuri Gagarin Cosmonaut Training Center. It has been announced that Russia is to build another spaceport in Tsiolkovsky, Amur Oblast; this Vostochny Cosmodrome was scheduled to be finished by 2018, having launched its first rocket in 2016.
Subsidiaries
As of 2017, Roscosmos had the following subsidiaries:
United Rocket and Space Corporation
Energia (38.2%)
Progress Rocket Space Centre
Yuri Gagarin Cosmonaut Training Center
NPO Energomash
NPO Lavochkin
Khrunichev State Research and Production Space Center
Strategicheskiye Punkty Upravleniya
Glavcosmos
Salavat Chemical Plant
Turbonasos
Moscow Institute of Thermal Technology
IPK Mashpribor
NPO Iskra
Makeyev Rocket Design Bureau
All-Russian Scientific Research Institute of Electromechanics
Information Satellite Systems Reshetnev
Russian Space Systems
Sistemy precizionnogo priborostroenia
Chemical Automatics Design Bureau
Proton-PM
Tekhnicheskiy Tsentr Novator
AO EKHO
NIIMP-K
TSKB Geofizika
Osoboye Konstruktorskoye Byuro Protivopozharnoy Tekhniki
Tsentralnoye Konstruktorskoye Byuro Transportnogo Mashinostroyeniya
NII komandnykh priborov
NPO Avtomatiki
Zlatoust Machine-Building Plant
Krasnoyarsk Machine-Building Plant
Miass Machine-Building Plant
Moskovskiy zavod elektromekhanicheskoy apparatury
Nauchno-issledovatelskiy Institut Elektromekhaniki
NPO Novator
PKP IRIS
NPP Geofizika-Kosmos
NPP Kvant
NPP Polyus
Ispytatelnyy tekhnicheskiy tsentr – NPO PM
NPO PM – Maloye Konstruktorskoye Byuro
NPO PM – Razvitiye
Sibpromproyekt
Scientific Research Institute of Precision Instruments
NIIFI
NPO Izmeritelnoy Tekhniki
OKB MEI
106 Experimental Optical and Mechanical Plant
OAO Bazalt
Nauchno-inzhenernyy tsentr elektrotekhnicheskogo universiteta
NPO Tekhnomash
Keldysh Research Center
Arsenal Design Bureau
MOKB Mars
NTTS Okhrana
NII Mashinostroyeniya
Scientific Production Association Of Automation And Instrument-Building
OKB Fakel
MNII Agat
TsNIIMash
Centre for Operation of Space Ground-based Infrastructure (TsENKI)
NTTS Zarya
NITs RKP
| Technology | Programs and launch sites | null |
3279786 | https://en.wikipedia.org/wiki/Comparison%20of%20the%20imperial%20and%20US%20customary%20measurement%20systems | Comparison of the imperial and US customary measurement systems | Both the British imperial measurement system and United States customary systems of measurement derive from earlier English unit systems used prior to 1824 that were the result of a combination of the local Anglo-Saxon units inherited from Germanic tribes and Roman units.
Having this shared heritage, the two systems are quite similar, but there are differences. The US customary system is based on English systems of the 18th century, while the imperial system was defined in 1824, almost a half-century after American independence.
Volume
Volume may be measured either in terms of units of cubic length or with specific volume units. The units of cubic length (the cubic inch, cubic foot, cubic mile, etc.) are the same in the imperial and US customary systems, but they differ in their specific units of volume (the bushel, gallon, fluid ounce, etc.). The US customary system has one set of units for fluids and another set for dry goods. The imperial system has only one set defined independently of, and subdivided differently from, its US counterparts.
By the end of the 18th century, various systems of volume measurement were in use throughout the British Empire. Wine was measured with units based on the wine gallon of 231 cubic inches (3.785 L), beer was measured with units based on an ale gallon of 282 cubic inches (4.621 L) and grain was measured with the Winchester measure with a gallon of approximately 268.8 cubic inches (one eighth of a Winchester bushel or 4.405 L). In 1824, these units were replaced with a single system based on the imperial gallon. Originally defined as the volume of 10 pounds of distilled water (under certain conditions), then redefined by the Weights and Measures Act 1985 to be exactly 4.54609 litres (277.4 cu in), the imperial gallon is close in size to the old ale gallon.
The Winchester measure was made obsolete in the British Empire but remained in use in the US. The Winchester bushel was replaced with an imperial bushel of eight imperial gallons. The subdivisions of the bushel were maintained. As with US dry measures, the imperial system divides the bushel into 4 pecks, 8 gallons, 32 quarts or 64 pints. Thus, all of these imperial measures are about 3% larger than are their US dry-measure counterparts.
Fluid measure is not as straightforward. The American colonists adopted a system based on the 231-cubic-inch wine gallon for all fluid purposes. This became the US fluid gallon. Both the imperial and US fluid gallon are divided into 4 quarts, 8 pints or 32 gills. However, whereas the US gill is divided into four US fluid ounces, the imperial gill is divided into five imperial fluid ounces. So whilst the imperial gallon, quart, pint and gill are about 20% larger than are their US fluid measure counterparts, the fluid ounce is about 4% smaller. One avoirdupois ounce of water has an approximate volume of one imperial fluid ounce at 62 °F (16.67 °C). This convenient fluid-ounce-to-avoirdupois-ounce relation does not exist in the US system.
One noticeable comparison between the imperial system and the US system is between some Canadian and American beer bottles. Many Canadian brewers package beer in 12-imperial-fluid-ounce bottles, which are 341 mL each. American brewers package their beer in 12-US-fluid-ounce bottles, which are 355 mL each. As a result, Canadian bottles are labelled as 11.5 fl oz in US units when imported into the United States. Because the standard size of Canadian beer bottles predates the adoption of the metric system in Canada, the bottles are still sold and labelled in Canada as 341 mL. Canned beer in Canada is sold and labelled in 355 mL cans, and when exported to the US, they are labelled as 12 fl oz.
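The arithmetic behind this bottle comparison is easy to verify (a minimal Python sketch; the per-ounce millilitre values follow exactly from the 4.54609 L imperial gallon of 160 imperial fluid ounces and the 3.785411784 L US gallon of 128 US fluid ounces):

```python
# Exact metric sizes of the two fluid ounces, derived from the gallon definitions.
IMP_FL_OZ_ML = 4546.09 / 160      # 28.4130625 mL per imperial fluid ounce
US_FL_OZ_ML = 3785.411784 / 128   # 29.5735295625 mL per US fluid ounce

canadian_bottle_ml = 12 * IMP_FL_OZ_ML   # ~340.96 mL, labelled 341 mL
american_bottle_ml = 12 * US_FL_OZ_ML    # ~354.88 mL, labelled 355 mL

# A Canadian bottle re-expressed in US fluid ounces, as on US import labels
print(canadian_bottle_ml / US_FL_OZ_ML)  # ~11.53, labelled 11.5 fl oz
```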
Length
The international yard is defined as exactly 0.9144 metres. This definition was approved by the United States, Canada, the United Kingdom, South Africa, Australia and New Zealand through the international yard and pound agreement of 1959, and corresponds with the previous 1930s British and American definitions of 1 inch being 25.4 mm. In all systems, a yard is 36 inches.
The US survey foot and survey mile were maintained as separate units for surveying purposes to avoid the accumulation of error that would follow replacing them with the international versions, particularly with State Plane Coordinate Systems. The choice of unit for surveying purposes is based on the unit used when the overall framework or geodetic datum for the region was established; for example, much of the former British empire still uses the Clarke foot for surveying.
The US survey foot is defined so that 1 metre is exactly 39.37 inches, making the international foot of 0.3048 metres exactly two parts per million shorter than the survey foot. This is a difference of just over 3.2 mm per mile, or a little more than one-eighth of an inch. According to the National Institute of Standards and Technology, the survey foot is obsolete as of 1 January 2023, and its use is discouraged.
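The two-parts-per-million figure can be checked directly from the definitions (a short worked calculation):

```latex
1\ \text{survey ft} = \frac{12}{39.37}\ \text{m} = \frac{1200}{3937}\ \text{m}
                   \approx 0.304\,800\,61\ \text{m},
\qquad
\frac{0.304\,800\,61 - 0.3048}{0.3048} \approx 2 \times 10^{-6}.
```

Over one mile of 5280 feet, the per-foot discrepancy of about 0.00000061 m accumulates to roughly 3.2 mm.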
The main units of length (inch, foot, yard and international mile) are the same in the imperial and US customary systems, though the US rarely uses some of the intermediate units today, such as the (surveyor's) chain (22 yards) and the furlong (220 yards).
At one time, the definition of the nautical mile was based on the surface area of the Clarke ellipsoid. While the US used the full value of 1853.256 metres, in the British Commonwealth this was rounded to 6080 feet (1853.184 m). Both have been replaced by the international nautical mile (which rounds one sixtieth of a degree of latitude at 45° to the nearest metre) of 1852 metres.
Weight and mass
Traditionally, both Britain and the US used three different weight systems: troy weight for precious metals, apothecaries' weight for medicines and avoirdupois weight for almost all other purposes. However, apothecaries' weight has now been superseded by the metric system.
One important difference is the widespread use in Britain of the stone of 14 pounds (6.35 kg) for body weight; this unit is not used in the United States, although flour was sold by a barrel of 196 pounds (14 stone) until World War II.
Another difference arose when Britain abolished the troy pound (373 g) on 1 January 1879, leaving only the troy ounce (31.1 g) and its decimal subdivisions, whereas the troy pound (of 12 troy ounces) and pennyweight are still legal in the United States, although they are no longer widely used.
In all of these systems, the fundamental unit is the pound (lb), and all other units are defined as fractions or multiples of a pound. The tables of imperial troy mass and apothecaries' mass are the same as the corresponding United States tables, except for the British spelling "drachm" in the table of apothecaries' mass. The table of imperial avoirdupois mass is the same as the United States table up to one pound, but above that point, the tables differ.
The imperial system has a hundredweight, defined as eight stone of 14 lb each, or 112 lb (50.8 kg), whereas a US hundredweight is 100 lb (45.4 kg). In both systems, 20 hundredweights make a ton. In the US, the terms long ton (2,240 lb; 1,016 kg) and short ton (2,000 lb; 907 kg) are used. The metric ton is the name used for the tonne (1,000 kg; 2,205 lb), which is about 1.6% less than the long ton.
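As a check of the final comparison (a short worked calculation using the exact pound of 0.45359237 kg):

```latex
1\ \text{long ton} = 2240\ \text{lb} \times 0.453\,592\,37\ \text{kg/lb} \approx 1016.05\ \text{kg},
\qquad
\frac{1016.05 - 1000}{1016.05} \approx 1.6\%.
```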
The US customary system also includes the kip, equivalent to 1,000 pounds of force, which is also occasionally used as a unit of weight of 1,000 pounds (usually in engineering contexts).
| Physical sciences | Measurement systems | Basics and measurement |
3281166 | https://en.wikipedia.org/wiki/Thermodynamic%20process | Thermodynamic process | Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes.
(1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress.
As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact.
A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process.
(2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.
(3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.
Kinds of process
Cyclic process
Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that indefinitely often, repeatedly returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.
Flow process
Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact.
A cycle of quasi-static processes
A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.
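In symbols, the work read off a PV diagram is a path integral, which is why it is a process variable rather than a state variable (a short statement of the standard relations):

```latex
W = \int_{V_1}^{V_2} P\,dV \quad\text{for one process},
\qquad
W_{\text{net}} = \oint P\,dV \quad\text{for the full cycle (the enclosed area)}.
```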
Conjugate variable processes
It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.
Pressure – volume
The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work.
An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.
An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, $\delta Q = dU$. The system is dynamically insulated, by a rigid boundary, from the environment.
Temperature – entropy
The temperature-entropy conjugate pair is concerned with the transfer of energy, especially for a closed system.
An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.
An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.
An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.
Chemical potential – particle number
The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.
In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.
The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.
Thermodynamic potentials
Any of the thermodynamic potentials may be held constant during a process. For example:
An isenthalpic process introduces no change in enthalpy in the system.
Polytropic processes
A polytropic process is a thermodynamic process that obeys the relation:

$P V^{n} = C$
where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas and, in some cases, liquids and solids.
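As a worked illustration, the quasi-static work along a polytropic path integrates in closed form to $W = (P_2 V_2 - P_1 V_1)/(1 - n)$ for $n \neq 1$. A minimal Python sketch under the assumption of a quasi-static change with illustrative values (the function name is ours):

```python
import math

def polytropic_work(p1, v1, v2, n):
    """Quasi-static work done by the gas along P * V**n = C,
    i.e. the integral of P dV from v1 to v2."""
    c = p1 * v1 ** n
    if math.isclose(n, 1.0):
        return c * math.log(v2 / v1)       # special case n = 1
    p2 = c / v2 ** n
    return (p2 * v2 - p1 * v1) / (1 - n)

# Example: gas at 100 kPa compressed from 1.0 m^3 to 0.5 m^3 with n = 1.4
print(polytropic_work(100e3, 1.0, 0.5, 1.4))  # negative: work is done ON the gas
```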
Processes classified by the second law of thermodynamics
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.
Natural process
Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Effectively reversible process
To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true.
Unnatural process
Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.
Quasistatic process
A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
| Physical sciences | Thermodynamics | Physics |
3285197 | https://en.wikipedia.org/wiki/Thermodynamic%20cycle | Thermodynamic cycle | A thermodynamic cycle consists of linked sequences of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink thereby acting as a heat pump. If at every point in the cycle the system is in thermodynamic equilibrium, the cycle is reversible. Whether carried out reversibly or irreversibly, the net entropy change of the system is zero, as entropy is a state function.
During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities (or path quantities), such as heat and work, are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies:

$W_{in} + Q_{in} = W_{out} + Q_{out}$

The above states that there is no change of the internal energy ($U$) of the system over the cycle. $W_{in} + Q_{in}$ represents the total work and heat input during the cycle and $W_{out} + Q_{out}$ would be the total work and heat output during the cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device.
Heat and work
Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure–volume (PV) diagram or temperature–entropy diagram, the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively.
Relationship to work
Because the net variation in state properties during a thermodynamic cycle is zero, it forms a closed loop on a P–V diagram. A P–V diagram's ordinate (Y axis) shows pressure (P) and its abscissa (X axis) shows volume (V). The area enclosed by the loop is the net work ($W$) done by the processes, i.e. the cycle:

$(1) \qquad W = \oint P\,dV$
This work is equal to the net heat ($Q$) transferred into and out of the system:

$(2) \qquad W = Q = Q_{in} - Q_{out}$
Equation (2) is consistent with the First Law; even though the internal energy changes during the course of the cyclic process, when the cyclic process finishes the system's internal energy is the same as the energy it had when the process began.
If the cyclic process moves clockwise around the loop, then $W$ will be positive, the cyclic machine will transform part of the heat exchanged into work, and it represents a heat engine. If it moves counterclockwise, then $W$ will be negative, the cyclic machine will require work to absorb heat at a low temperature and reject it at a higher temperature, and it represents a heat pump.
A list of thermodynamic processes
The following processes are often used to describe different stages of a thermodynamic cycle:
Adiabatic : No energy transfer as heat during that part of the cycle ($\delta Q = 0$). Energy transfer is considered as work done by the system only.
Isothermal : The process is at a constant temperature during that part of the cycle ($T = \text{constant}$, $\delta T = 0$). Energy transfer is considered as heat removed from or work done by the system.
Isobaric : Pressure in that part of the cycle will remain constant ($P = \text{constant}$, $\delta P = 0$). Energy transfer is considered as heat removed from or work done by the system.
Isochoric : The process is constant volume ($V = \text{constant}$, $\delta V = 0$). Energy transfer is considered as heat removed from the system, as the work done by the system is zero.
Isentropic : The process is one of constant entropy ($S = \text{constant}$, $\delta S = 0$). It is adiabatic (no heat or mass exchange) and reversible.
Isenthalpic : The process proceeds without any change in enthalpy or specific enthalpy.
Polytropic : The process obeys the relation $P V^{n} = C$.
Reversible : The process where the net entropy production is zero; $dS = \delta Q / T$.
Example: The Otto cycle
The Otto cycle is an example of a reversible thermodynamic cycle.
1→2: Isentropic / adiabatic expansion: Constant entropy (s), Decrease in pressure (P), Increase in volume (v), Decrease in temperature (T)
2→3: Isochoric cooling: Constant volume (v), Decrease in pressure (P), Decrease in entropy (S), Decrease in temperature (T)
3→4: Isentropic / adiabatic compression: Constant entropy (s), Increase in pressure (P), Decrease in volume (v), Increase in temperature (T)
4→1: Isochoric heating: Constant volume (v), Increase in pressure (P), Increase in entropy (S), Increase in temperature (T)
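The four stages above can be checked numerically for an ideal gas (a minimal sketch under air-standard assumptions; the compression ratio, gas properties, and heat input are illustrative):

```python
gamma = 1.4    # heat-capacity ratio of air (assumed)
r = 8.0        # compression ratio V_max / V_min (assumed)
cv = 718.0     # J/(kg K), specific heat of air at constant volume (assumed)
T3 = 300.0     # K, state 3: cold gas at maximum volume (assumed)
q_in = 800e3   # J/kg added during the 4->1 isochoric heating (assumed)

T4 = T3 * r ** (gamma - 1)     # 3->4 isentropic compression: T V^(gamma-1) constant
T1 = T4 + q_in / cv            # 4->1 isochoric heating at minimum volume
T2 = T1 * r ** (1 - gamma)     # 1->2 isentropic expansion back to maximum volume
q_out = cv * (T2 - T3)         # 2->3 isochoric cooling closes the cycle

eta = 1 - q_out / q_in
print(eta, 1 - r ** (1 - gamma))  # both ~0.565, the air-standard Otto efficiency
```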
Power cycles
Thermodynamic power cycles are the basis for the operation of heat engines, which supply most of the world's electric power and run the vast majority of motor vehicles. Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real world devices (real cycles) are difficult to analyze because of the presence of complicating effects (friction), and the absence of sufficient time for the establishment of equilibrium conditions. For the purpose of analysis and design, idealized models (ideal cycles) are created; these ideal models allow engineers to study the effects of major parameters that dominate the cycle without having to spend significant time working out intricate details present in the real cycle model.
Power cycles can also be divided according to the type of heat engine they seek to model. The most common cycles used to model internal combustion engines are the Otto cycle, which models gasoline engines, and the Diesel cycle, which models diesel engines. Cycles that model external combustion engines include the Brayton cycle, which models gas turbines, the Rankine cycle, which models steam turbines, the Stirling cycle, which models hot air engines, and the Ericsson cycle, which also models hot air engines.
For example, the pressure–volume mechanical work output from the ideal Stirling cycle (net work out), consisting of 4 thermodynamic processes, is:

$(3) \qquad W_{net} = W_{1\to 2} + W_{2\to 3} + W_{3\to 4} + W_{4\to 1}$

For the ideal Stirling cycle, no volume change happens in processes 4→1 and 2→3, thus equation (3) simplifies to:

$W_{net} = W_{1\to 2} + W_{3\to 4}$
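For an ideal working gas, the two surviving isothermal terms integrate to logarithms, so the net work out can be sketched directly (illustrative values; the process numbering follows the equation above, with 1→2 the hot isothermal expansion):

```python
import math

R = 8.314                       # J/(mol K), gas constant
n_mol = 1.0                     # moles of working gas (assumed)
T_hot, T_cold = 700.0, 300.0    # K, the two isotherm temperatures (assumed)
V_small, V_large = 1e-3, 3e-3   # m^3, volumes at the two isochores (assumed)

W_12 = n_mol * R * T_hot * math.log(V_large / V_small)    # hot expansion, positive
W_34 = n_mol * R * T_cold * math.log(V_small / V_large)   # cold compression, negative

# Net work out = n R (T_hot - T_cold) ln(V_large / V_small)
print(W_12 + W_34)
```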
Heat pump cycles
Thermodynamic heat pump cycles are the models for household heat pumps and refrigerators. There is no difference between the two except the purpose of the refrigerator is to cool a very small space while the household heat pump is intended to warm or cool a house. Both work by moving heat from a cold space to a warm space. The most common refrigeration cycle is the vapor compression cycle, which models systems using refrigerants that change phase. The absorption refrigeration cycle is an alternative that absorbs the refrigerant in a liquid solution rather than evaporating it. Gas refrigeration cycles include the reversed Brayton cycle and the Hampson–Linde cycle. Multiple compression and expansion cycles allow gas refrigeration systems to liquify gases.
Modeling real systems
Thermodynamic cycles may be used to model real devices and systems, typically by making a series of assumptions to reduce the problem to a more manageable form. For example, as shown in the figure, devices such as a gas turbine or jet engine can be modeled as a Brayton cycle. The actual device is made up of a series of stages, each of which is itself modeled as an idealized thermodynamic process. Although each stage which acts on the working fluid is a complex real device, they may be modelled as idealized processes which approximate their real behavior. If energy is added by means other than combustion, then a further assumption is that the exhaust gases would be passed from the exhaust to a heat exchanger that would sink the waste heat to the environment and the working gas would be reused at the inlet stage.
The difference between an idealized cycle and actual performance may be significant. For example, the following images illustrate the differences in work output predicted by an ideal Stirling cycle and the actual performance of a Stirling engine:
As the net work output for a cycle is represented by the interior of the cycle, there is a significant difference between the predicted work output of the ideal cycle and the actual work output shown by a real engine. It may also be observed that the real individual processes diverge from their idealized counterparts; e.g., isochoric expansion (process 1-2) occurs with some actual volume change.
Well-known thermodynamic cycles
In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes. Any thermodynamic processes may be used. However, idealized cycles are often modeled with processes in which one state variable is kept constant, such as:
adiabatic (no heat transfer)
isothermal (constant temperature)
isobaric (constant pressure)
isochoric (constant volume)
isentropic (constant entropy)
isenthalpic (constant enthalpy)
Some example thermodynamic cycles and their constituent processes are as follows:
Ideal cycle
An ideal cycle is simple to analyze and consists of:
TOP (A) and BOTTOM (C) of the loop: a pair of parallel isobaric processes
RIGHT (B) and LEFT (D) of the loop: a pair of parallel isochoric processes
If the working substance is a perfect gas, $U$ is only a function of $T$ for a closed system, since its internal pressure vanishes. Therefore, the internal energy changes of a perfect gas undergoing various processes connecting initial state $i$ to final state $f$ are always given by the formula

$\Delta U = C_V\,(T_f - T_i)$

Assuming that $C_V$ is constant, $\Delta U = C_V\,\Delta T$ for any process undergone by a perfect gas.

Under this set of assumptions, for processes A and C we have $W = P\,\Delta V$ and $\Delta U = C_V\,\Delta T$, whereas for processes B and D we have $W = 0$ and $\Delta U = C_V\,\Delta T$.

The total work done per cycle is $W = (P_A - P_C)(V_B - V_D)$, which is just the area of the rectangle. If the total heat flow per cycle is required, this is easily obtained. Since $\Delta U = 0$ over the complete cycle, we have $Q = W$.
Thus, the total heat flow per cycle is calculated without knowing the heat capacities and temperature changes for each step (although this information would be needed to assess the thermodynamic efficiency of the cycle).
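A numeric sketch of this rectangular cycle, traversed clockwise (illustrative pressures and volumes; the only physics used is summing $P\,\Delta V$ around the loop and $Q = W$ for a closed cycle):

```python
P_top, P_bottom = 200e3, 100e3   # Pa, the two isobars (assumed)
V_left, V_right = 1e-3, 2e-3     # m^3, the two isochores (assumed)

W_A = P_top * (V_right - V_left)       # top isobar, expanding left to right
W_B = 0.0                              # right isochore: no volume change, no work
W_C = P_bottom * (V_left - V_right)    # bottom isobar, compressed right to left
W_D = 0.0                              # left isochore

W = W_A + W_B + W_C + W_D
print(W, (P_top - P_bottom) * (V_right - V_left))  # both 100.0 J, the rectangle area
# Net heat per cycle equals net work, since the internal energy change is zero.
```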
Carnot cycle
The Carnot cycle is a cycle composed of the totally reversible processes of isentropic compression and expansion and isothermal heat addition and rejection. The thermal efficiency of a Carnot cycle depends only on the absolute temperatures of the two reservoirs in which heat transfer takes place, and for a power cycle is:

$\eta = 1 - \frac{T_L}{T_H}$

where $T_L$ is the lowest cycle temperature and $T_H$ the highest. For Carnot power cycles, the coefficient of performance for a heat pump is:

$\mathit{COP}_{HP} = \frac{T_H}{T_H - T_L}$

and for a refrigerator the coefficient of performance is:

$\mathit{COP}_{R} = \frac{T_L}{T_H - T_L}$
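These Carnot limits are straightforward to evaluate (a minimal sketch; the reservoir temperatures are illustrative):

```python
T_hot, T_cold = 500.0, 300.0   # K, reservoir temperatures (assumed)

eta = 1 - T_cold / T_hot               # maximum power-cycle efficiency: 0.4
cop_hp = T_hot / (T_hot - T_cold)      # maximum heat-pump COP: 2.5
cop_ref = T_cold / (T_hot - T_cold)    # maximum refrigerator COP: 1.5

print(eta, cop_hp, cop_ref)
```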
The second law of thermodynamics limits the efficiency and COP for all cyclic devices to levels at or below the Carnot efficiency. The Stirling cycle and Ericsson cycle are two other reversible cycles that use regeneration to obtain isothermal heat transfer.
Stirling cycle
A Stirling cycle is like an Otto cycle, except that the adiabats are replaced by isotherms. It is also the same as an Ericsson cycle with the isobaric processes replaced by constant-volume processes.
TOP and BOTTOM of the loop: a pair of quasi-parallel isothermal processes
LEFT and RIGHT sides of the loop: a pair of parallel isochoric processes
Heat flows into the loop through the top isotherm and the left isochore, and some of this heat flows back out through the bottom isotherm and the right isochore, but most of the heat flow is through the pair of isotherms. This makes sense since all the work done by the cycle is done by the pair of isothermal processes, which are described by $Q = W$. This suggests that all the net heat comes in through the top isotherm. In fact, all of the heat which comes in through the left isochore comes out through the right isochore: since the top isotherm is all at the same warmer temperature $T_H$ and the bottom isotherm is all at the same cooler temperature $T_L$, and since change in energy for an isochore is proportional to change in temperature, all of the heat coming in through the left isochore is cancelled out exactly by the heat going out the right isochore.
State functions and entropy
If Z is a state function then the balance of Z remains unchanged during a cyclic process:
$\oint dZ = 0 .$
Entropy is a state function and is defined in an absolute sense through the Third Law of Thermodynamics as

$S = \int_0^T \frac{\delta Q_{rev}}{T}$
where a reversible path is chosen from absolute zero to the final state, so that for an isothermal reversible process
$\Delta S = \frac{Q_{rev}}{T} .$
In general, for any cyclic process the state points can be connected by reversible paths, so that

$\oint dS = \oint \frac{\delta Q_{rev}}{T} = 0$
meaning that the net entropy change of the working fluid over a cycle is zero.
| Physical sciences | Thermodynamics | null |
2393975 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%20existence%20and%20mass%20gap | Yang–Mills existence and mass gap | The Yang–Mills existence and mass gap problem is an unsolved problem in mathematical physics and mathematics, and one of the seven Millennium Prize Problems defined by the Clay Mathematics Institute, which has offered a prize of US$1,000,000 for its solution.
The problem is phrased as follows:
Yang–Mills Existence and Mass Gap. Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on $\mathbb{R}^4$ and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975).
In this statement, a quantum Yang–Mills theory is a non-abelian quantum field theory similar to that underlying the Standard Model of particle physics; $\mathbb{R}^4$ is Euclidean 4-space; the mass gap Δ is the mass of the least massive particle predicted by the theory.
Therefore, the winner must prove that:
Yang–Mills theory exists and satisfies the standard of rigor that characterizes contemporary mathematical physics, in particular constructive quantum field theory, and
The masses of all particles of the force field predicted by the theory are strictly positive.
For example, in the case of G=SU(3)—the strong nuclear interaction—the winner must prove that glueballs have a lower mass bound, and thus cannot be arbitrarily light.
The general problem of determining the presence of a spectral gap in a system is known to be undecidable.
Background
The problem requires the construction of a QFT satisfying the Wightman axioms and showing the existence of a mass gap. Both of these topics are described in sections below.
The Wightman axioms
The Millennium problem requires the proposed Yang–Mills theory to satisfy the Wightman axioms or similarly stringent axioms. There are four axioms:
W0 (assumptions of relativistic quantum mechanics)
Quantum mechanics is described according to von Neumann; in particular, the pure states are given by the rays, i.e. the one-dimensional subspaces, of some separable complex Hilbert space.
The Wightman axioms require that the Poincaré group acts unitarily on the Hilbert space. In other words, the theory has position-dependent operators called quantum fields which form covariant representations of the Poincaré group.
The group of space-time translations is commutative, and so the operators can be simultaneously diagonalised. The generators of these groups give us four self-adjoint operators, $P^0, P^1, P^2, P^3$, which transform under the homogeneous group as a four-vector, called the energy-momentum four-vector.
The second part of the zeroth axiom of Wightman is that the representation U(a, A) fulfills the spectral condition—that the simultaneous spectrum of energy-momentum is contained in the forward cone:

$p^0 \geq 0, \qquad p^\mu p_\mu = (p^0)^2 - |\mathbf{p}|^2 \geq 0 .$
The third part of the axiom is that there is a unique state, represented by a ray in the Hilbert space, which is invariant under the action of the Poincaré group. It is called a vacuum.
W1 (assumptions on the domain and continuity of the field)
For each test function f, there exists a set of operators which, together with their adjoints, are defined on a dense subset of the Hilbert state space, containing the vacuum. The fields A are operator-valued tempered distributions. The Hilbert state space is spanned by the field polynomials acting on the vacuum (cyclicity condition).
W2 (transformation law of the field)
The fields are covariant under the action of the Poincaré group, and they transform according to some representation S of the Lorentz group, or SL(2,C) if the spin is not integer:

$U(a, \Lambda)\,\phi(x)\,U(a, \Lambda)^{-1} = S(\Lambda^{-1})\,\phi(\Lambda x + a) .$
W3 (local commutativity or microscopic causality)
If the supports of two fields are space-like separated, then the fields either commute or anticommute.
Cyclicity of a vacuum, and uniqueness of a vacuum are sometimes considered separately. Also, there is the property of asymptotic completeness—that the Hilbert state space is spanned by the asymptotic spaces and , appearing in the collision S matrix. The other important property of field theory is the mass gap which is not required by the axioms—that the energy-momentum spectrum has a gap between zero and some positive number.
Mass gap
In quantum field theory, the mass gap is the difference in energy between the vacuum and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
For a given real field $\phi(x)$, we can say that the theory has a mass gap if the two-point function has the property

$\langle \phi(0, t)\,\phi(0, 0) \rangle \sim C\,e^{-\Delta t} \quad \text{as } t \to \infty ,$

with $\Delta > 0$ being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations. It was proved in this way that Yang–Mills theory develops a mass gap on a lattice.
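This is essentially what lattice computations do: measure a two-point correlator and fit its large-time exponential decay. A minimal sketch with synthetic data (the correlator values and the single-exponential model are illustrative assumptions, not lattice results):

```python
import math

# Synthetic correlator C(t) = A * exp(-gap * t), with an assumed gap of 0.5
A, gap = 2.0, 0.5
corr = [A * math.exp(-gap * t) for t in range(10)]

# Effective mass m_eff(t) = ln(C(t) / C(t+1)) plateaus at the mass gap
m_eff = [math.log(corr[t] / corr[t + 1]) for t in range(9)]
print(m_eff)  # every entry is 0.5 here; real lattice data plateaus at large t
```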
Importance of Yang–Mills theory
Most known and nontrivial (i.e. interacting) quantum field theories in 4 dimensions are effective field theories with a cutoff scale. Since the beta function is positive for most models, it appears that most such models have a Landau pole, as it is not at all clear whether they have nontrivial UV fixed points. This means that if such a QFT is well-defined at all scales, as it has to be to satisfy the axioms of axiomatic quantum field theory, it would have to be trivial (i.e. a free field theory).
Quantum Yang–Mills theory with a non-abelian gauge group and no quarks is an exception, because asymptotic freedom characterizes this theory, meaning that it has a trivial UV fixed point. Hence it is the simplest nontrivial constructive QFT in 4 dimensions. (QCD is a more complicated theory because it involves quarks.)
Quark confinement
At the level of rigor of theoretical physics, it has been well established that the quantum Yang–Mills theory for a non-abelian Lie group exhibits a property known as confinement; though proper mathematical physics has more demanding requirements on a proof. A consequence of this property is that above the confinement scale, the color charges are connected by chromodynamic flux tubes leading to a linear potential between the charges. Hence isolated color charge and isolated gluons cannot exist. In the absence of confinement, we would expect to see massless gluons, but since they are confined, all we would see are color-neutral bound states of gluons, called glueballs. If glueballs exist, they are massive, which is why a mass gap is expected.
| Physical sciences | Particle physics: General | Physics |
2393984 | https://en.wikipedia.org/wiki/Liquid%E2%80%93liquid%20extraction | Liquid–liquid extraction | Liquid–liquid extraction, also known as solvent extraction and partitioning, is a method to separate compounds or metal complexes, based on their relative solubilities in two different immiscible liquids, usually water (polar) and an organic solvent (non-polar). There is a net transfer of one or more species from one liquid into another liquid phase, generally from aqueous to organic. The transfer is driven by chemical potential, i.e. once the transfer is complete, the overall system of chemical components that make up the solutes and the solvents are in a more stable configuration (lower free energy). The solvent that is enriched in solute(s) is called the extract. The feed solution that is depleted in solute(s) is called the raffinate. Liquid–liquid extraction is a basic technique in chemical laboratories, where it is performed using a variety of apparatus, from separatory funnels to countercurrent distribution equipment such as mixer-settlers. This type of process is commonly performed after a chemical reaction as part of the work-up, often including an acidic work-up.
The term partitioning is commonly used to refer to the underlying chemical and physical processes involved in liquid–liquid extraction, but on another reading may be fully synonymous with it. The term solvent extraction can also refer to the separation of a substance from a mixture by preferentially dissolving that substance in a suitable solvent. In that case, a soluble compound is separated from an insoluble compound or a complex matrix.
From a hydrometallurgical perspective, solvent extraction is exclusively used in the separation and purification of uranium and plutonium, zirconium and hafnium, the separation of cobalt and nickel, and the separation and purification of rare earth elements, its greatest advantage being its ability to selectively separate out even very similar metals. One obtains high-purity single metal streams on 'stripping' out the metal value from the 'loaded' organic, wherein one can precipitate or deposit the metal value. Stripping is the opposite of extraction: transfer of mass from the organic to the aqueous phase.
Liquid–liquid extraction is also widely used in the production of fine organic compounds, the processing of perfumes, the production of vegetable oils and biodiesel, and other industries. It is among the most common initial separation techniques, though some difficulties result in extracting out closely related functional groups.
Liquid–liquid extraction can be substantially accelerated in microfluidic devices, reducing extraction and separation times from minutes or hours to mere seconds compared with conventional extractors.
Liquid–liquid extraction is possible in non-aqueous systems: In a system consisting of a molten metal in contact with molten salts, metals can be extracted from one phase to the other. This is related to a mercury electrode where a metal can be reduced; the metal will often then dissolve in the mercury to form an amalgam that modifies its electrochemistry greatly. For example, it is possible for sodium cations to be reduced at a mercury cathode to form sodium amalgam, while at an inert electrode (such as platinum) the sodium cations are not reduced. Instead, water is reduced to hydrogen. A detergent or fine solid can be used to stabilize an emulsion, or third phase.
Measures of effectiveness
Distribution ratio
In solvent extraction, a distribution ratio (D) is often quoted as a measure of how well-extracted a species is. The distribution ratio is a measure of the total concentration of a solute in the organic phase divided by its concentration in the aqueous phase. The partition or distribution coefficient (Kd) is the ratio of solute concentration in each layer upon reaching equilibrium. This distinction between D and Kd is important. The partition coefficient is a thermodynamic equilibrium constant and has a fixed value for the solute’s partitioning between the two phases. The distribution ratio’s value, however, changes with solution conditions if the relative amounts of the solute's different chemical forms change. If we know the solute’s equilibrium reactions within each phase and between the two phases, we can derive an algebraic relationship between Kd and D. The partition coefficient and the distribution ratio are identical if the solute has only one chemical form in each phase; however, if the solute exists in more than one chemical form in either phase, then Kd and D usually have different values. Depending on the system, the distribution ratio can be a function of temperature, the concentration of chemical species in the system, and a large number of other parameters. Note that D is related to the Gibbs free energy (ΔG) of the extraction process.
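The practical consequence of D is that it fixes how much solute each contact removes, and why several small extractions beat one large one. A minimal sketch for repeated batch extractions (the volumes, the D value, and the function name are illustrative assumptions):

```python
def fraction_remaining(D, v_aq, v_org, n):
    """Fraction of solute left in the aqueous phase after n successive
    batch extractions, each with fresh organic solvent."""
    q = v_aq / (v_aq + D * v_org)   # fraction remaining after one contact
    return q ** n

# D = 5 and a 100 mL aqueous sample:
print(fraction_remaining(5, 100, 90, 1))   # one 90 mL extraction: ~18% remains
print(fraction_remaining(5, 100, 30, 3))   # three 30 mL extractions: ~6% remains
```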
In solvent extraction, two immiscible liquids are shaken together. The more polar solutes dissolve preferentially in the more polar solvent, and the less polar solutes in the less polar solvent; for example, nonpolar halogens preferentially dissolve in non-polar mineral oil.
Separation factors
The separation factor is one distribution ratio divided by another; it is a measure of the ability of the system to separate two solutes. For instance, if the distribution ratio for nickel (DNi) is 10 and the distribution ratio for silver (DAg) is 100, then the silver/nickel separation factor is SFAg/Ni = DAg/DNi = 100/10 = 10.
Measures of success
Success of liquid–liquid extraction is measured through separation factors and decontamination factors. The best way to understand the success of an extraction column is through the liquid–liquid equilibrium (LLE) data set. The data set can then be converted into a curve to determine the steady state partitioning behavior of the solute between the two phases. The y-axis is the concentration of solute in the extract (solvent) phase, and the x-axis is the concentration of the solute in the raffinate phase. From here, one can determine steps for optimization of the process.
Techniques
Batch wise single stage extractions
This is commonly used on the small scale in chemical labs. It is normal to use a separating funnel. Processes include DLLME and direct organic extraction. After equilibration, the extract phase containing the desired solute is separated out for further processing.
Dispersive liquid–liquid microextraction (DLLME)
A process used to extract small amounts of organic compounds from water samples. This process is done by injecting small amounts of an appropriate extraction solvent (C2Cl4) and a disperser solvent (acetone) into the aqueous solution. The resulting solution is then centrifuged to separate the organic and aqueous layers. This process is useful for extracting organic compounds such as organochloride and organophosphorus pesticides, as well as substituted benzene compounds, from water samples.
Direct organic extraction
By mixing partially organic soluble samples in organic solvent (toluene, benzene, xylene), the organic soluble compounds will dissolve into the solvent and can be separated using a separatory funnel. This process is valuable in the extraction of proteins and specifically phosphoprotein and phosphopeptide phosphatases.
Another example of this application is extracting anisole from a mixture of water and 5% acetic acid using ether, then the anisole will enter the organic phase. The two phases would then be separated. The acetic acid can then be scrubbed (removed) from the organic phase by shaking the organic extract with sodium bicarbonate. The acetic acid reacts with the sodium bicarbonate to form sodium acetate, carbon dioxide, and water.
Caffeine can also be extracted from coffee beans and tea leaves using a direct organic extraction. The beans or leaves can be soaked in ethyl acetate which favorably dissolves the caffeine, leaving a majority of the coffee or tea flavor remaining in the initial sample.
Multistage countercurrent continuous processes
These are commonly used in industry for the processing of metals such as the lanthanides; because the separation factors between the lanthanides are so small, many extraction stages are needed. In the multistage processes, the aqueous raffinate from one extraction unit is fed to the next unit as the aqueous feed, while the organic phase is moved in the opposite direction. Hence, in this way, even if the separation between two metals in each stage is small, the overall system can have a higher decontamination factor.
Multistage countercurrent arrays have been used for the separation of lanthanides. For the design of a good process, the distribution ratio should not be too high (>100) or too low (<0.1) in the extraction portion of the process. It is often the case that the process will have a section for scrubbing unwanted metals from the organic phase, and finally a stripping section to obtain the metal back from the organic phase.
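A rough sketch of why tiny per-stage separations still work in a cascade (a deliberately simplified model that assumes each stage multiplies the ratio of the two metals by the full per-stage separation factor; real design uses stage-wise mass balances):

```python
def overall_separation(per_stage_sf, n_stages):
    """Idealized overall separation factor after n countercurrent stages."""
    return per_stage_sf ** n_stages

# Adjacent lanthanides often have per-stage separation factors barely above 1
print(overall_separation(1.5, 20))   # ~3325, a usable overall separation
```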
Mixer–settlers
A battery of mixer-settlers can be interconnected countercurrently. Each mixer-settler unit provides a single stage of extraction. A mixer-settler consists of a first stage that mixes the phases together, followed by a quiescent settling stage that allows the phases to separate by gravity.
A novel settling device, the Sudhin BioSettler, can separate an oil-water emulsion continuously at a much faster rate than simple gravity settlers: an oil-water emulsion, stirred by an impeller in an external reservoir and pumped continuously into the two bottom side ports of the BioSettler, is separated very quickly into a clear organic (mineral oil) layer exiting via the top and an aqueous layer that is pumped out continuously from the bottom.
In the multistage countercurrent process, multiple mixer settlers are installed with mixing and settling chambers located at alternating ends for each stage (since the outlet of the settling sections feed the inlets of the adjacent stage's mixing sections). Mixer-settlers are used when a process requires longer residence times and when the solutions are easily separated by gravity. They require a large facility footprint, but do not require much headspace, and need limited remote maintenance capability for occasional replacement of mixing motors. (Colven, 1956; Davidson, 1957)
Centrifugal extractors
Centrifugal extractors mix and separate in one unit. Two liquids are intensively mixed between the spinning rotor and the stationary housing at speeds up to 6000 RPM. This develops a large interfacial area for mass transfer from the aqueous phase into the organic phase. At 200–2000 g, both phases are separated again. Centrifugal extractors minimize the solvent in the process, optimize the product load in the solvent and extract the aqueous phase completely. Counter-current and cross-current extractions are easily established.
Extraction without chemical change
Some solutes such as noble gases can be extracted from one phase to another without the need for a chemical reaction (see absorption). This is the simplest type of solvent extraction. When two immiscible liquids are shaken together, the more polar solutes dissolve preferentially in the more polar solvent, and the less polar solutes in the less polar solvent. Some solutes that do not at first sight appear to undergo a reaction during the extraction process do not have a distribution ratio that is independent of concentration. A classic example is the extraction of carboxylic acids (HA) into nonpolar media such as benzene. Here, it is often the case that the carboxylic acid will form a dimer in the organic layer, so the distribution ratio will change as a function of the acid concentration (measured in either phase).
For this case, with the dimerisation equilibrium 2 HA(aq) ⇌ (HA)₂(org), the extraction constant k is described by $k = [(\mathrm{HA})_2]_{\text{organic}} / [\mathrm{HA}]_{\text{aqueous}}^2$.
Solvation mechanism
Using solvent extraction it is possible to extract uranium, plutonium, thorium and many rare earth elements from acid solutions in a selective way by using the right choice of organic extracting solvent and diluent. One solvent used for this purpose is the organophosphate tributyl phosphate (TBP). The PUREX process that is commonly used in nuclear reprocessing uses a mixture of tri-n-butyl phosphate and an inert hydrocarbon (kerosene); the uranium(VI) is extracted from strong nitric acid and is back-extracted (stripped) using weak nitric acid. An organic-soluble uranium complex [UO2(TBP)2(NO3)2] is formed, then the organic layer bearing the uranium is brought into contact with a dilute nitric acid solution; the equilibrium is shifted away from the organic-soluble uranium complex and towards the free TBP and uranyl nitrate in dilute nitric acid. The plutonium(IV) forms a similar complex to the uranium(VI), but it is possible to strip the plutonium in more than one way; a reducing agent that converts the plutonium to the trivalent oxidation state can be added. This oxidation state does not form a stable complex with TBP and nitrate unless the nitrate concentration is very high (circa 10 mol/L nitrate is required in the aqueous phase). Another method is to simply use dilute nitric acid as a stripping agent for the plutonium. This PUREX chemistry is a classic example of a solvation extraction. In this case, $D_U = k\,[\mathrm{TBP}]^2[\mathrm{NO_3^-}]^2$.
Ion exchange mechanism
Another extraction mechanism is known as the ion exchange mechanism. Here, when an ion is transferred from the aqueous phase to the organic phase, another ion is transferred in the other direction to maintain the charge balance. This additional ion is often a hydrogen ion; for ion exchange mechanisms, the distribution ratio is often a function of pH. An example of an ion exchange extraction would be the extraction of americium by a combination of terpyridine and a carboxylic acid in tert-butyl benzene. In this case
$D_{Am} = k\,[\text{terpyridine}]\,[\text{carboxylic acid}]^3\,[\mathrm{H^+}]^{-3}$
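Distribution laws of this form are why pH is such a useful handle in ion-exchange extraction: on a logarithmic scale the proton dependence is a straight line whose slope reveals the number of protons exchanged. A small sketch evaluating the quoted law (k and the reagent concentrations are illustrative assumptions):

```python
def d_am(k, terpy, acid, h_plus):
    """Distribution ratio from the quoted law D = k [terpy] [acid]^3 [H+]^-3."""
    return k * terpy * acid ** 3 * h_plus ** -3

k, terpy, acid = 1e-9, 0.01, 0.1   # illustrative values only
for pH in (1, 2, 3):
    print(pH, d_am(k, terpy, acid, 10.0 ** -pH))
# log10(D) rises by 3 per pH unit: a slope of +3 against pH, matching the -3 exponent
```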
Another example is the extraction of zinc, cadmium, or lead by a dialkyl phosphinic acid (R2PO2H) into a nonpolar diluent such as an alkane. A non-polar diluent favours the formation of uncharged non-polar metal complexes.
Some extraction systems are able to extract metals by both the solvation and ion exchange mechanisms; an example of such a system is the americium (and lanthanide) extraction from nitric acid by a combination of 6,6'-bis-(5,6-dipentyl-1,2,4-triazin-3-yl)-2,2'-bipyridine and 2-bromohexanoic acid in tert-butyl benzene. At both high- and low-nitric acid concentrations, the metal distribution ratio is higher than it is for an intermediate nitric acid concentration.
Ion pair extraction
It is possible by careful choice of counterion to extract a metal. For instance, if the nitrate concentration is high, it is possible to extract americium as an anionic nitrate complex if the mixture contains a lipophilic quaternary ammonium salt.
An example that is more likely to be encountered by the 'average' chemist is the use of a phase transfer catalyst. This is a charged species that transfers another ion to the organic phase. The ion reacts and then forms another ion, which is then transferred back to the aqueous phase.
For instance, 31.1 kJ mol−1 is required to transfer an acetate anion into nitrobenzene, while the energy required to transfer a chloride anion from an aqueous phase to nitrobenzene is 43.8 kJ mol−1. Hence, if the aqueous phase in a reaction is a solution of sodium acetate while the organic phase is a nitrobenzene solution of benzyl chloride, then, when a phase transfer catalyst is present, the acetate anions can be transferred from the aqueous layer, where they react with the benzyl chloride to form benzyl acetate and a chloride anion. The chloride anion is then transferred to the aqueous phase. The transfer energies of the anions contribute to that given out by the reaction.
A 43.8 to 31.1 kJ mol−1 = 12.7 kJ mol−1 of additional energy is given out by the reaction when compared with energy if the reaction had been done in nitrobenzene using one equivalent weight of a tetraalkylammonium acetate.
Types of aqueous two-phase extractions
Polymer–polymer systems. In a polymer–polymer system, both phases are generated by a dissolved polymer. The heavy phase will generally be a polysaccharide, and the light phase is generally polyethylene glycol (PEG). Traditionally, the polysaccharide used is dextran; however, dextran is relatively expensive, and research has explored less expensive polysaccharides to generate the heavy phase. If the target compound being separated is a protein or enzyme, it is possible to incorporate a ligand that binds the target into one of the polymer phases. This improves the target's affinity for that phase, and improves its ability to partition from one phase into the other (a numerical sketch of this effect is given after this list of system types). This, as well as the absence of solvents or other denaturing agents, makes polymer–polymer extractions an attractive option for purifying proteins. The two phases of a polymer–polymer system often have very similar densities and very low surface tension between them. Because of this, demixing a polymer–polymer system is often much more difficult than demixing a solvent extraction. Methods to improve the demixing include centrifugation and application of an electric field.
Polymer–salt systems. Aqueous two-phase systems can also be formed by using a concentrated salt solution as the heavy phase. The polymer phase used is generally still PEG. Generally, a kosmotropic salt such as Na3PO4 is used, although PEG–NaCl systems have been documented when the salt concentration is high enough. Since polymer–salt systems demix readily, they are easier to use. However, at high salt concentrations, proteins generally either denature or precipitate from solution. Thus, polymer–salt systems are not as useful for purifying proteins.
Ionic liquid systems. Ionic liquids are ionic compounds with low melting points. While they are not technically aqueous, recent research has experimented with using them in an extraction that does not use organic solvents.
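For the polymer–polymer case above, the benefit of an affinity ligand can be made concrete with the partition coefficient K = C_top/C_bottom and the phase-volume ratio R = V_top/V_bottom: the fraction of target recovered in the (PEG-rich) top phase in a single stage is KR/(KR + 1). A short sketch with hypothetical numbers:

def top_phase_yield(K, R=1.0):
    """Fraction of solute recovered in the top phase.
    K = C_top / C_bottom (partition coefficient); R = V_top / V_bottom."""
    return K * R / (K * R + 1.0)

# Hypothetical effect of an affinity ligand raising K from 2 to 20:
for K in (2.0, 20.0):
    print(f"K = {K}: recovery in top phase = {top_phase_yield(K):.1%}")

With equal phase volumes, raising K from 2 to 20 lifts the single-stage recovery from about 67% to about 95%.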
DNA purification
The ability to purify DNA from a sample is important for many modern biotechnology processes. However, samples often contain nucleases that degrade the target DNA before it can be purified. It has been shown that DNA fragments will partition into the light phase of a polymer–salt separation system. If ligands known to bind and deactivate nucleases are incorporated into the polymer phase, the nucleases will then partition into the heavy phase and be deactivated. Thus, this polymer–salt system is a useful tool for purifying DNA from a sample while simultaneously protecting it from nucleases.
Food industry
The PEG–NaCl system has been shown to be effective at partitioning small molecules, such as peptides and nucleic acids. These compounds are often flavorants or odorants. The system could therefore be used by the food industry to isolate or eliminate particular flavors. Caffeine extraction used to be done using liquid–liquid extraction, specifically direct and indirect liquid–liquid extraction (Swiss Water Method), but has since moved towards supercritical CO2, as it is cheaper and can be done on a commercial scale.
Analytical chemistry
Often there are chemical species present or necessary at one stage of sample processing that will interfere with the analysis. For example, some air monitoring is performed by drawing air through a small glass tube filled with sorbent particles that have been coated with a chemical to stabilize or derivatize the analyte of interest. The coating may be of such a concentration or character that it would damage the instrumentation or interfere with the analysis. If the sample can be extracted from the sorbent using a nonpolar solvent (such as toluene or carbon disulfide), and the coating is polar (such as HBr or phosphoric acid), the dissolved coating will partition into the aqueous phase. The reverse is true as well: a polar extraction solvent can be paired with a nonpolar solvent to partition away a nonpolar interferent. A small aliquot of the organic phase (or, in the latter case, the polar phase) can then be injected into the instrument for analysis.
Purification of amines
Amines (analogously to ammonia) have a lone pair of electrons on the nitrogen atom that can form a relatively weak bond to a hydrogen atom. It is therefore the case that under acidic conditions amines are typically protonated, carrying a positive charge, and under basic conditions they are typically deprotonated and neutral. Protonated amines of sufficiently low molecular weight are rather polar and can form hydrogen bonds with water, and therefore dissolve readily in aqueous solutions. Deprotonated amines, on the other hand, are neutral and have greasy, nonpolar organic substituents, and therefore have a higher affinity for nonpolar organic solvents. As such, purification steps can be carried out where an aqueous solution of an amine is neutralized with a base such as sodium hydroxide, then shaken in a separatory funnel with a nonpolar solvent that is immiscible with water. The organic phase is then drained off. Subsequent processing can recover the amine by techniques such as recrystallization, evaporation or distillation; extraction back to a polar phase can be performed by adding HCl and shaking again in a separatory funnel (at which point the ammonium ion could be recovered by adding an insoluble counterion), or reactions could be performed in either phase as part of a chemical synthesis.
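The acid-base switching that this purification relies on is quantified by the Henderson-Hasselbalch relation: the protonated fraction of an amine is 1/(1 + 10^(pH − pKa)). A small sketch (the pKa of 10.6 below is typical of a simple aliphatic ammonium ion, used here only as an illustration):

def fraction_protonated(pH, pKa=10.6):
    """Protonated (charged, water-soluble) fraction of an amine, via Henderson-Hasselbalch."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (2.0, 7.0, 13.0):
    print(f"pH {pH}: {fraction_protonated(pH):.1%} protonated")

At pH 2 the amine is essentially fully protonated and stays in the water; at pH 13 it is almost entirely neutral and partitions into the organic solvent.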
Temperature swing solvent extraction
Temperature swing solvent extraction is an experimental technique for desalinating water. It has been used to remove up to 98.5% of the salt content in water, and is able to process hypersaline brines that cannot be desalinated using reverse osmosis.
Kinetics of extraction
It is important to investigate the rate at which the solute is transferred between the two phases; in some cases, an alteration of the contact time can alter the selectivity of the extraction. For instance, the extraction of palladium or nickel can be very slow because the rate of ligand exchange at these metal centers is much lower than the rates for iron or silver complexes.
Aqueous complexing agents
If a complexing agent is present in the aqueous phase, it can lower the distribution ratio. For instance, when iodine is distributed between water and an inert organic solvent such as carbon tetrachloride, the presence of iodide in the aqueous phase alters the extraction chemistry: instead of D = [I2 (organic)]/[I2 (aq)] being a constant, the ratio becomes
D = [I2 (organic)]/([I2 (aq)] + [I3− (aq)])
This is because the iodine reacts with the iodide to form I3−. The I3− anion is an example of a polyhalide anion that is quite common.
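Combining the simple partition coefficient K_D of free I2 with the formation constant K_f of I3− gives D = K_D/(1 + K_f[I−]), since [I3−] = K_f[I2 (aq)][I−]. A sketch of the resulting iodide dependence (both numerical values below are approximate literature-style figures, used only for illustration):

K_D = 85.0   # approximate partition coefficient of I2 between CCl4 and water
K_f = 700.0  # L/mol, approximate formation constant of I3- at 25 degrees C

def apparent_D(iodide):
    """Apparent distribution ratio of iodine in the presence of aqueous iodide."""
    return K_D / (1.0 + K_f * iodide)

for iodide in (0.0, 0.001, 0.01, 0.1):  # mol/L
    print(f"[I-] = {iodide} M -> D = {apparent_D(iodide):.2f}")

As iodide is added, more of the iodine is held in the aqueous phase as I3− and the apparent distribution ratio falls steadily from K_D towards zero.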
Industrial process design
In a typical scenario, an industrial process will use an extraction step in which solutes are transferred from the aqueous phase to the organic phase; this is often followed by a scrubbing stage in which unwanted solutes are removed from the organic phase, then a stripping stage in which the wanted solutes are removed from the organic phase. The organic phase may then be treated to make it ready for use again.
After use, the organic phase may be subjected to a cleaning step to remove any degradation products; for instance, in PUREX plants, the used organic phase is washed with sodium carbonate solution to remove any dibutyl hydrogen phosphate or butyl dihydrogen phosphate that might be present.
Liquid–liquid equilibrium calculations
In order to calculate the phase equilibrium, it is necessary to use a thermodynamic model such as NRTL, UNIQUAC, etc. The corresponding parameters of these models can be obtained from literature (e.g. Dechema Chemistry Data Series, Dortmund Data Bank, etc.) or by a correlation process of experimental data.
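For a binary mixture the NRTL activity coefficients reduce to a closed form, which is enough to show how such a model is evaluated in practice. A minimal sketch (tau12, tau21 and alpha below are placeholder parameters; real values would be regressed from experimental data or taken from a source such as the Dortmund Data Bank):

import math

def nrtl_binary(x1, tau12, tau21, alpha=0.2):
    """Activity coefficients (gamma1, gamma2) from the binary NRTL equations."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

print(nrtl_binary(x1=0.3, tau12=2.0, tau21=1.5))  # placeholder parameters

The liquid–liquid equilibrium itself is then found by solving x_i(I)·gamma_i(I) = x_i(II)·gamma_i(II) for the compositions of the two coexisting phases.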
Equipment
While solvent extraction is often done on a small scale by synthetic lab chemists using a separatory funnel, Craig apparatus or membrane-based techniques, it is normally done on the industrial scale using machines that bring the two liquid phases into contact with each other. Such machines include centrifugal contactors, Thin Layer Extraction, spray columns, pulsed columns, and mixer-settlers.
Extraction of metals
The extraction methods for a range of metals include:
Cobalt
Cobalt can be extracted from hydrochloric acid using Alamine 336 (tri-octyl/decyl amine) in meta-xylene. Cobalt can also be extracted using Ionquest 290 or Cyanex 272 {bis-(2,4,4-trimethylpentyl) phosphinic acid}.
Copper
Copper can be extracted using hydroxyoximes as extractants; a recent paper describes an extractant that has a good selectivity for copper over cobalt and nickel.
Neodymium
The rare-earth element neodymium is extracted by di(2-ethyl-hexyl)phosphoric acid into hexane by an ion exchange mechanism.
Nickel
Nickel can be extracted using di(2-ethyl-hexyl)phosphoric acid and tributyl phosphate in a hydrocarbon diluent (Shellsol).
Palladium and platinum
Dialkyl sulfides, tributyl phosphate and alkyl amines have been used for extracting palladium and platinum.
Polonium
Polonium is produced in reactors from natural 209Bi bombarded with neutrons, creating 210Bi, which then decays to 210Po via beta-minus decay. The final purification is done pyrochemically, followed by liquid–liquid extraction against sodium hydroxide.
Zinc and cadmium
Zinc and cadmium are both extracted by an ion exchange process, in which N,N,N′,N′-tetrakis(2-pyridylmethyl)ethylenediamine (TPEN) acts as a masking agent for the zinc and an extractant for the cadmium. In the modified Zincex process, zinc is separated from most divalent ions by solvent extraction. D2EHPA (di(2-ethylhexyl)phosphoric acid) is used for this: a zinc ion replaces the protons of two D2EHPA molecules. To strip the zinc from the D2EHPA, sulfuric acid is used, at a concentration above 170 g/L (typically 240–265 g/L).
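Written as an equilibrium, with HR standing for one D2EHPA molecule and (org) marking organic-phase species, the exchange is

\[
\mathrm{Zn^{2+}_{(aq)} + 2\,HR_{(org)} \rightleftharpoons ZnR_{2\,(org)} + 2\,H^{+}_{(aq)}}
\]

Loading at low acidity drives the equilibrium to the right; contacting the loaded organic phase with concentrated sulfuric acid raises [H+] and pushes it back to the left, returning the zinc to the aqueous phase.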
Lithium
Lithium extraction has become more popular owing to the high demand for lithium-ion batteries. TBP (tri-butyl phosphate) is mostly used to extract lithium from brine (with a high Li/Mg ratio). Alternatively, Cyanex 272 has also been used to extract lithium. The mechanism of lithium extraction was found to differ from that of other metals, such as cobalt, owing to the weak coordination bonding between lithium ions and the extractants.
| Physical sciences | Other separations | Chemistry |
2393990 | https://en.wikipedia.org/wiki/Wise%20Observatory | Wise Observatory | The Florence and George Wise Observatory (IAU code 097) is an astronomical observatory owned and operated by Tel Aviv University. It is located west of the town of Mitzpe Ramon in the Negev desert near the edge of the Ramon Crater, and it is the only professional astronomical observatory in Israel.
History
The observatory was founded in October 1971 as a collaboration between Tel Aviv University and the Smithsonian Institution, and named after the late Dr. George S. Wise, the first President of the Tel Aviv University. The observatory is a research laboratory of Tel Aviv University. It belongs to the Raymond and Beverly Sackler Faculty of Exact Sciences and it serves mainly staff and graduate students from the Department of Astronomy and Astrophysics of the School of Physics and Astronomy, and from the Department of Geophysics and Planetary Sciences. Traditionally, the Wise Observatory Director is appointed by Tel Aviv University's Dean of Exact Sciences from the senior academic staff of the Department of Astronomy and Astrophysics.
The directors of the Wise Observatory since its foundation were:
Uri Feldman (1971–1973)
Asher Gottesman (1973–1975)
Dror Sadeh (1975–1977)
Elia Leibowitz (1977–1980)
Hagai Netzer (1980–1983)
Elia Leibowitz (1983–1988)
Tsevi Mazeh (1988–1990)
Hagai Netzer (1990–1991)
Elia Leibowitz (1991–1998)
Dan Maoz (1998–2000)
Noah Brosch (2000–2006)
Tsevi Mazeh (2006 – February 2007)
Noah Brosch (February 2007 – 2010)
Tsevi Mazeh (2011–2012)
Dan Maoz (since 2012)
Site
The number of clear nights (zero cloudiness) at the Wise Observatory site is about 170 a year. The number of useful nights, with part of the night cloud-free, is about 240. The best season, when practically no clouds are observed, is June to August, while the highest chance of clouds is in the period January to April. Winds are usually moderate, mainly from the north-east and north. Storm wind velocities occur, but rarely. The wind speed tends to decrease during the night. Temperature gradients are small and fairly moderate. The average relative humidity is quite high, with a tendency to decline during the night from April to August.
The average seeing is about 2–3 seconds of arc. A few good nights have seeing of 1″ or less, while a few show seeing larger than 5″.
An important advantage of the Wise Observatory at its location of ~35°E in the Northern Hemisphere is the possibility of cooperating with observatories at other longitudes for time-series studies. Such projects involve searches for stellar oscillations within the Whole Earth Telescope project, monitoring gravitational microlensing events, combined ground and space observing campaigns, etc.
Research highlights and discoveries
A project to monitor active galactic nuclei (AGNs) photometrically and spectroscopically is still running, after about 30 years of data collection. Other major projects include searches for supernovae and extrasolar planets (transiting or lensing), and investigations of star formation processes in galaxies through wide- and narrow-band filter imaging. Lately, some emphasis has been put on studies of Near Earth Objects (NEOs), with the research focus being the rotational properties of NEOs and of other asteroids through the investigation of their light curves.
As of 2016, the Wise Observatory is credited by the Minor Planet Center with the discovery of 17 numbered minor planets during 1999–2007. Moreover, another 8 minor planets were discovered at the Wise Observatory but are now credited to individual astronomers such as David Polishook.
Equipment
The observatory operates a 1 m-diameter Boller and Chivens telescope, which is a wide-field Ritchey-Chrétien reflector mounted on a rigid, off-axis equatorial mount. This telescope was originally a twin of the Las Campanas 1 m Swope telescope, which was described by Bowen and Vaughan (1973), though the two instruments diverged somewhat over the years. It also has two CCD cameras, a two-star "Nather-type" photometer, a "Faint-object spectrograph-camera" (FOSC), and an older Boller and Chivens spectrograph. The photoelectric photometer and the Boller and Chivens spectrograph have not been in use for more than a decade.
A dioptric focal reducer (Maala) was used at f/7 to project a field of view almost one-degree wide on one of the CCDs (a SITe 2048x4096 pixel array) at the cost of slightly larger than optimal PSF sampling and some edge-of-field distortions. However, this instrument never produced satisfactory images and its use was discontinued.
A new CCD camera entered regular use in 2006: it is a Princeton Instruments Versarray with 1340×1300 pixels, each 20 μm wide, with a peak quantum efficiency of 95% and good response in the blue part of the spectrum. Another camera was operated from the end of 2007 to about 2014; this was a CCD mosaic covering a one-degree non-contiguous field of view at f/7 in a single exposure (the LAIWO (Large Array Imager of the Wise Observatory) camera). This camera was composed of four 4096x4096 pixel non-butted Fairchild CCDs that are thick and front-illuminated, and thus have a response peaking in the red with approximately 42% quantum efficiency. A smaller CCD with very high quantum efficiency and fast readout, centered between the four large CCDs, was used for guiding and fast photometry of selected objects. LAIWO was a cooperative endeavour of the Wise Observatory (PI: T. Mazeh) with the Max Planck Institute for Astronomy Heidelberg (PI: T. Henning).
A prime-focus computer-controlled telescope was added to the Wise Observatory in 2005, mainly for minor planet CCD photometry, funded by the Israel Space Agency as part of a National Knowledge Center on Near Earth Objects. This is a Centurion-18 (C18) that has been extensively modified by the observatory staff in a continuous effort to transform it into a robotic telescope. The telescope was originally equipped with a thermoelectrically cooled SBIG ST-10XME CCD camera with 2184x1472 pixels, each 6.8 micrometres wide and each subtending slightly more than one arcsec at the telescope prime focus. Since early 2009 this CCD has been replaced by an SBIG STL-6303 CCD with 2048x3072 pixels, each 9 micrometres wide. The telescope and its camera, including the telescope dome, can be remotely operated.
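The quoted pixel scales follow from the usual plate-scale relation, scale (arcsec/pixel) ≈ 206265 × pixel size / focal length. A small sketch (the 1300 mm focal length assumed for the C18 prime focus below is an illustrative guess consistent with the stated scale, not a published figure):

def pixel_scale_arcsec(pixel_um, focal_length_mm):
    """Sky angle per pixel: 206265 arcsec/radian, scaled by pixel size over focal length."""
    return 206.265 * pixel_um / focal_length_mm

focal_length = 1300.0  # mm, assumed C18 prime-focus focal length (illustrative)
for camera, pixel in (("SBIG ST-10XME", 6.8), ("SBIG STL-6303", 9.0)):
    print(f"{camera}: {pixel_scale_arcsec(pixel, focal_length):.2f} arcsec/pixel")

The 6.8 μm pixels give about 1.08″ per pixel, matching "slightly more than one arcsec"; the 9 μm pixels of the replacement camera give a correspondingly coarser ≈1.43″ per pixel.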
A 70 cm (28-inch) prime-focus telescope, essentially the "big brother" of the C18 and called the Jay Baum Rich Telescope (JBRT), was added in 2013. This telescope has been commissioned and is in routine robotic operation.
A wide-field telescope was installed in 2016 and is being commissioned. This telescope is a node of the Korean OWL-Net (Optical Wide-field patroL Network), which acquires and maintains orbital information of LEO satellites by purely optical means. OWL-Net is part of, and is operated by, the Korea Astronomy and Space Science Institute (KASI).
Observing time
Observations at the Wise Observatory are allocated on a semi-annual basis, for the periods from the beginning of April to the end of September (first semester) and from the beginning of October to the end of March of the following year (second semester). The allocation is competitive and is based on the scientific merit of each proposal. The observing time is, in principle, open to qualified observers from all over the world. Over the years, most of the observing time during a given period has been allocated to one or two large, long-term projects carried out by Tel Aviv faculty and graduate students.
| Technology | Ground-based observatories | null |
2394288 | https://en.wikipedia.org/wiki/Gynoecium | Gynoecium | Gynoecium (plural: gynoecia) is most commonly used as a collective term for the parts of a flower that produce ovules and ultimately develop into the fruit and seeds. The gynoecium is the innermost whorl of a flower; it consists of (one or more) pistils and is typically surrounded by the pollen-producing reproductive organs, the stamens, collectively called the androecium. The gynoecium is often referred to as the "female" portion of the flower, although rather than directly producing female gametes (i.e. egg cells), the gynoecium produces megaspores, each of which develops into a female gametophyte which then produces egg cells.
The term gynoecium is also used by botanists to refer to a cluster of archegonia and any associated modified leaves or stems present on a gametophyte shoot in mosses, liverworts, and hornworts. The corresponding terms for the male parts of those plants are clusters of antheridia within the androecium. Flowers that bear a gynoecium but no stamens are called pistillate or carpellate. Flowers lacking a gynoecium are called staminate.
The gynoecium is often referred to as female because it gives rise to female (egg-producing) gametophytes; however, strictly speaking sporophytes do not have a sex, only gametophytes do. Gynoecium development and arrangement is important in systematic research and identification of angiosperms, but can be the most challenging of the floral parts to interpret.
Introduction
Unlike (most) animals, plants grow new organs after embryogenesis, including new roots, leaves, and flowers. In the flowering plants, the gynoecium develops in the central region of the flower as a carpel or in groups of fused carpels. After fertilization, the gynoecium develops into a fruit that provides protection and nutrition for the developing seeds, and often aids in their dispersal. The gynoecium has several specialized tissues, which develop from genetic and hormonal interactions along three major axes. These tissues arise from meristems that produce cells which differentiate into the various tissues forming the parts of the gynoecium, including the pistil, carpels, ovary, and ovules; the carpel margin meristem (arising from the carpel primordium) produces the ovules, the ovary septum, and the transmitting tract, and plays a role in fusing the apical margins of carpels.
Pistil
The gynoecium may consist of one or more separate pistils. A pistil typically consists of an expanded basal portion called an ovary, an elongated section called a style and an apical structure called a stigma that receives pollen.
The ovary (from Latin ovum, meaning egg) is the enlarged basal portion which contains placentas, ridges of tissue bearing one or more ovules (integumented megasporangia). The placentas and/or ovule(s) may be borne on the gynoecial appendages or, less frequently, on the floral apex. The chamber in which the ovules develop is called a locule (or sometimes cell).
The style (from Ancient Greek στῦλος, stylos, meaning a pillar) is a pillar-like stalk through which pollen tubes grow to reach the ovary. Some flowers, such as those of Tulipa, do not have a distinct style, and the stigma sits directly on the ovary. The style is a hollow tube in some plants, such as lilies, or has transmitting tissue through which the pollen tubes grow.
The stigma (from Ancient Greek στίγμα, stigma, meaning mark or puncture) is usually found at the tip of the style, the portion of the carpel(s) that receives pollen (male gametophytes). It is commonly sticky or feathery to capture pollen.
The word "pistil" comes from Latin pistillum meaning pestle. A sterile pistil in a male flower is referred to as a pistillode.
Carpels
The pistils of a flower are considered to be composed of one or more carpels. A carpel is the female reproductive part of the flower—usually composed of the style and stigma (sometimes having its individual ovary, and sometimes connecting to a shared basal ovary)—and usually interpreted as a modified leaf that bears structures called ovules, inside which egg cells ultimately form. A pistil may consist of one carpel (with its ovary, style and stigma); or it may comprise several carpels joined together to form a single ovary, the whole unit called a pistil. The gynoecium may present as one or more uni-carpellate pistils or as one multi-carpellate pistil. (The number of carpels is denoted by terms such as tricarpellate (three carpels).)
Carpels are thought to be phylogenetically derived from ovule-bearing leaves or leaf homologues (megasporophylls), which evolved to form a closed structure containing the ovules. This structure is typically rolled and fused along the margin.
Although many flowers satisfy the above definition of a carpel, there are also flowers that do not have carpels because in these flowers the ovule(s), although enclosed, are borne directly on the floral apex. Therefore, the carpel has been redefined as an appendage that encloses ovule(s) and may or may not bear them. However, the most unobjectionable definition of the carpel is simply that of an appendage that encloses an ovule or ovules.
Types
If a gynoecium has a single carpel, it is called monocarpous. If a gynoecium has multiple, distinct (free, unfused) carpels, it is apocarpous. If a gynoecium has multiple carpels "fused" into a single structure, it is syncarpous. A syncarpous gynoecium can sometimes appear very much like a monocarpous gynoecium.
The degree of connation ("fusion") in a syncarpous gynoecium can vary. The carpels may be "fused" only at their bases, but retain separate styles and stigmas. The carpels may be "fused" entirely, except for retaining separate stigmas. Sometimes (e.g., Apocynaceae) carpels are fused by their styles or stigmas but possess distinct ovaries. In a syncarpous gynoecium, the "fused" ovaries of the constituent carpels may be referred to collectively as a single compound ovary. It can be a challenge to determine how many carpels fused to form a syncarpous gynoecium. If the styles and stigmas are distinct, they can usually be counted to determine the number of carpels. Within the compound ovary, the carpels may have distinct locules divided by walls called septa. If a syncarpous gynoecium has a single style and stigma and a single locule in the ovary, it may be necessary to examine how the ovules are attached. Each carpel will usually have a distinct line of placentation where the ovules are attached.
Pistil
Pistils begin as small primordia on a floral apical meristem, forming later than, and closer to the (floral) apex than sepal, petal and stamen primordia. Morphological and molecular studies of pistil ontogeny reveal that carpels are most likely homologous to leaves.
A carpel has a similar function to a megasporophyll, but typically includes a stigma, and is fused, with ovules enclosed in the enlarged lower portion, the ovary.
In some basal angiosperm lineages, Degeneriaceae and Winteraceae, a carpel begins as a shallow cup where the ovules develop with laminar placentation, on the upper surface of the carpel. The carpel eventually forms a folded, leaf-like structure, not fully sealed at its margins. No style exists, but a broad stigmatic crest along the margin allows pollen tubes access along the surface and between hairs at the margins.
Two kinds of fusion have been distinguished: postgenital fusion that can be observed during the development of flowers, and congenital fusion that cannot be observed i.e., fusions that occurred during phylogeny. But it is very difficult to distinguish fusion and non-fusion processes in the evolution of flowering plants. Some processes that have been considered congenital (phylogenetic) fusions appear to be non-fusion processes such as, for example, the de novo formation of intercalary growth in a ring zone at or below the base of primordia. Therefore, "it is now increasingly acknowledged that the term 'fusion,' as applied to phylogeny (as in 'congenital fusion') is ill-advised."
Gynoecium position
Basal angiosperm groups tend to have carpels arranged spirally around a conical or dome-shaped receptacle. In later lineages, carpels tend to be in whorls.
The relationship of the other flower parts to the gynoecium can be an important systematic and taxonomic character. In some flowers, the stamens, petals, and sepals are often said to be "fused" into a "floral tube" or hypanthium. However, as Leins & Erbar (2010) pointed out, "the classical view that the wall of the inferior ovary results from the "congenital" fusion of dorsal carpel flanks and the floral axis does not correspond to the ontogenetic processes that can actually be observed. All that can be seen is an intercalary growth in a broad circular zone that changes the shape of the floral axis (receptacle)." And what happened during evolution is not a phylogenetic fusion but the formation of a unitary intercalary meristem. Evolutionary developmental biology investigates such developmental processes that arise or change during evolution.
If the hypanthium is absent, the flower is hypogynous, and the stamens, petals, and sepals are all attached to the receptacle below the gynoecium. Hypogynous flowers are often referred to as having a superior ovary. This is the typical arrangement in most flowers.
If the hypanthium is present up to the base of the style(s), the flower is epigynous. In an epigynous flower, the stamens, petals, and sepals are attached to the hypanthium at the top of the ovary or, occasionally, the hypanthium may extend beyond the top of the ovary. Epigynous flowers are often referred to as having an inferior ovary. Plant families with epigynous flowers include orchids, asters, and evening primroses.
Between these two extremes are perigynous flowers, in which a hypanthium is present, but is either free from the gynoecium (in which case it may appear to be a cup or tube surrounding the gynoecium) or connected partly to the gynoecium (with the stamens, petals, and sepals attached to the hypanthium part of the way up the ovary). Perigynous flowers are often referred to as having a half-inferior ovary (or, sometimes, partially inferior or half-superior). This arrangement is particularly frequent in the rose family and saxifrages.
Occasionally, the gynoecium is borne on a stalk, called the gynophore, as in Isomeris arborea.
Placentation
Within the ovary, each ovule is borne by a placenta or arises as a continuation of the floral apex. The placentas often occur in distinct lines called lines of placentation. In monocarpous or apocarpous gynoecia, there is typically a single line of placentation in each ovary. In syncarpous gynoecia, the lines of placentation can be regularly spaced along the wall of the ovary (parietal placentation), or near the center of the ovary. In the latter case, separate terms are used depending on whether or not the ovary is divided into separate locules. If the ovary is divided, with the ovules borne on a line of placentation at the inner angle of each locule, this is axile placentation. An ovary with free central placentation, on the other hand, consists of a single compartment without septa and the ovules are attached to a central column that arises directly from the floral apex (axis). In some cases a single ovule is attached to the bottom or top of the locule (basal or apical placentation, respectively).
The ovule
In flowering plants, the ovule (from Latin ovulum, meaning small egg) is a complex structure borne inside ovaries. The ovule initially consists of a stalked, integumented megasporangium (also called the nucellus). Typically, one cell in the megasporangium undergoes meiosis, resulting in one to four megaspores. These develop into a megagametophyte (often called the embryo sac) within the ovule. The megagametophyte typically develops a small number of cells, including two special cells, an egg cell and a binucleate central cell, which are the gametes involved in double fertilization. The central cell, once fertilized by a sperm cell from the pollen, becomes the first cell of the endosperm, and the egg cell, once fertilized, becomes the zygote that develops into the embryo. The gap in the integuments through which the pollen tube enters to deliver sperm to the egg is called the micropyle. The stalk attaching the ovule to the placenta is called the funiculus.
Role of the stigma and style
Stigmas can vary from long and slender to globe-shaped to feathery. The stigma is the receptive tip of the carpel(s), which receives pollen at pollination and on which the pollen grain germinates. The stigma is adapted to catch and trap pollen, either by combing pollen off visiting insects or by various hairs, flaps, or sculpturings.
The style and stigma of the flower are involved in most types of self incompatibility reactions. Self-incompatibility, if present, prevents fertilization by pollen from the same plant or from genetically similar plants, and ensures outcrossing.
Primitively developed carpels, as seen in such groups of plants as Tasmannia and Degeneria, lack styles, and the stigmatic surface is produced along the carpel margins.
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
2394627 | https://en.wikipedia.org/wiki/Cetyl%20alcohol | Cetyl alcohol | Cetyl alcohol, also known as hexadecan-1-ol and palmityl alcohol, is a C-16 fatty alcohol with the formula CH3(CH2)15OH. At room temperature, cetyl alcohol takes the form of a waxy white solid or flakes. The name cetyl refers to whale oil (cetacea oil, from Latin cetus, from Ancient Greek κῆτος, meaning whale), from which it was first isolated.
Preparation
Cetyl alcohol was discovered in 1817 by the French chemist Michel Chevreul when he heated spermaceti, a waxy substance obtained from sperm whale oil, with caustic potash (potassium hydroxide). Flakes of cetyl alcohol were left behind on cooling. Modern production is based around the chemical reduction of ethyl palmitate.
Occurrence and uses
The ether chimyl alcohol, derived from cetyl alcohol and glycerol, is a component of some lipid membranes.
Cetyl alcohol is used in the cosmetic industry as an opacifier in shampoos, or as an emollient, emulsifier or thickening agent in the manufacture of skin creams and lotions. It is also employed as a lubricant for nuts and bolts, and is the active ingredient in some "liquid pool covers" (forming a non-volatile surface layer to reduce water evaporation, related latent vaporization heat loss, and thus to retain heat in the pool). Moreover, it can also be used as a non-ionic co-surfactant in emulsion applications.
Side effects
People who have eczema can be sensitive to cetyl alcohol, though this may be due to impurities rather than cetyl alcohol itself. However, cetyl alcohol is sometimes included in medications used for the treatment of eczema.
Related compounds
Palmitate
Palmitic acid
| Physical sciences | Alcohols | Chemistry |
2395127 | https://en.wikipedia.org/wiki/Oogonium | Oogonium | An oogonium (plural: oogonia) is a small diploid cell which, upon maturation, forms a primordial follicle in a female fetus, or the female (haploid or diploid) gametangium of certain thallophytes.
In the mammalian fetus
Oogonia are formed in large numbers by mitosis early in fetal development from primordial germ cells. In humans they start to develop between weeks 4 and 8 and are present in the fetus between weeks 5 and 30.
Structure
Normal oogonia in human ovaries are spherical or ovoid in shape and are found amongst neighboring somatic cells and oocytes at different phases of development. Oogonia can be distinguished from neighboring somatic cells, under an electron microscope, by observing their nuclei. Oogonial nuclei contain randomly dispersed fibrillar and granular material whereas the somatic cells have a more condensed nucleus that creates a darker outline under the microscope. Oogonial nuclei also contain dense prominent nucleoli. The chromosomal material in the nucleus of mitotically dividing oogonia shows as a dense mass surrounded by vesicles or double membranes.
The cytoplasm of oogonia appears similar to that of the surrounding somatic cells and similarly contains large round mitochondria with lateral cristae. The endoplasmic reticulum (E.R.) of oogonia, however, is very underdeveloped and is made up of several small vesicles. Some of these small vesicles contain cisternae with ribosomes and are found located near the golgi apparatus.
Oogonia that are undergoing degeneration appear slightly different under the electron microscope. In these oogonia, the chromosomes clump together into an indistinguishable mass within the nucleus and the mitochondria and E.R. appear to be swollen and disrupted. Degenerating oogonia are usually found partially or wholly engulfed in neighboring somatic cells, identifying phagocytosis as the mode of elimination.
Development and differentiation
In the blastocyst of the mammalian embryo, primordial germ cells arise from proximal epiblasts under the influence of extra-embryonic signals. These germ cells then travel, via amoeboid movement, to the genital ridge and eventually into the undifferentiated gonads of the fetus. During the 4th or 5th week of development, the gonads begin to differentiate. In the absence of the Y chromosome, the gonads will differentiate into ovaries. As the ovaries differentiate, ingrowths called cortical cords develop. This is where the primordial germ cells collect.
During the 6th to 8th week of female (XX) embryonic development, the primordial germ cells grow and begin to differentiate into oogonia. Oogonia proliferate via mitosis during the 9th to 22nd week of embryonic development. There can be up to 600,000 oogonia by the 8th week of development and up to 7,000,000 by the 5th month.
Eventually, the oogonia will either degenerate or further differentiate into primary oocytes through asymmetric division. Asymmetric division is a process of mitosis in which one oogonium divides unequally to produce one daughter cell that will eventually become an oocyte through the process of oogenesis, and one daughter cell that is an identical oogonium to the parent cell. This occurs during the 15th week to the 7th month of embryonic development. Most oogonia have either degenerated or differentiated into primary oocytes by birth.
Primary oocytes will undergo oogenesis, in which they enter meiosis. However, primary oocytes are arrested in prophase I of the first meiosis and remain in that arrested stage until puberty begins in the female adult. This is in contrast to male primordial germ cells, which are arrested in the spermatogonial stage at birth and do not enter into spermatogenesis and meiosis to produce primary spermatocytes until puberty in the adult male.
Regulation of oogonia differentiation and entry into oogenesis
The regulation and differentiation of germ cells into primary gametocytes ultimately depends on the sex of the embryo and the differentiation of the gonads. In female mice, the protein RSPO1 is responsible for the differentiation of female (XX) gonads into ovaries. RSPO1 activates the β-catenin signaling pathway by up-regulating Wnt4 which is an essential step in ovary differentiation. Research has shown that ovaries lacking Rspo1 or Wnt4 will exhibit sex reversal of the gonads, the formation of ovotestes and the differentiation of somatic sertoli cells, which aid in the development of sperm.
After female (XX) germ cells collect in the undifferentiated gonads, the up-regulation of Stra8 is required for germ cells to differentiate into oogonia and eventually enter meiosis. One major factor that contributes to the up-regulation of Stra8 is the initiation of the β-catenin signaling pathway via RSPO1, which is also responsible for ovary differentiation. Since RSPO1 is produced in somatic cells, this protein acts on germ cells in a paracrine mode. Rspo1, however, is not the only factor in Stra8 regulation. Many other factors are under scrutiny and this process is still being evaluated.
Oogonial stem cells
It is theorized that oogonia either degenerate or differentiate into primary oocytes which enter oogenesis and are halted in prophase I of the first meiosis post partum. Therefore, it is believed that adult mammalian females lack a population of germ cells that can renew or regenerate, and instead have a large population of primary oocytes that are arrested in the first meiosis until puberty. At puberty, one primary oocyte will continue meiosis each menstrual cycle. Because there is an absence of regenerating germ cells and oogonia in the human, the number of primary oocytes dwindles after each menstrual cycle until menopause, when the female no longer has a population of primary oocytes.
Recent research, however, has identified that renewable oogonia may be present in the lining of the female ovaries of humans, primates and mice. It is thought that these germ cells might be necessary for the upkeep of the reproductive follicles and oocyte development well into adulthood. It has also been discovered that some stem cells may migrate from the bone marrow to the ovaries as a source of extragonadal germ cells. These mitotically active germ cells found in adult mammals were identified by tracking several markers that are common in oocytes. These potential renewable germ cells were identified as positive for these essential oocyte markers.
The discovery of these active germ cells and oogonia in the adult female could be very useful in the advancement of fertility research and treatment of infertility. Germ cells have been extracted, isolated and grown successfully in vitro. These germ cells have been used to restore fertility in mice by promoting follicle generation and upkeep in previously infertile mice. There is also research being done on possible germ line regeneration in primates. Mitotically active human female germ cells could be very beneficial to a new method of embryonic stem cell development that involves a nuclear transfer into a zygote. Using these functional oogonia may help to create patient-specific stem cell lines using this method.
Controversy
There is significant controversy regarding the existence of mammalian oogonial stem cells. The controversy stems from negative data that have originated from many laboratories in the United States. Multiple approaches to verify the existence of oogonial stem cells have yielded negative results, and no research group in the United States has been able to reproduce the initial findings.
In certain thallophytes
In phycology and mycology, oogonium refers to a female gametangium if the union of the male (motile or non-motile) and the female gamete takes place within this structure.
In Oomycota and some other organisms, the female oogonia, and the male equivalent antheridia, are a result of sexual sporulation, i.e. the development of structures within which meiosis will occur. The haploid nuclei (gametes) are formed by meiosis within the antheridia and oogonia, and when fertilization occurs, a diploid oospore is produced which will eventually germinate into the diploid somatic stage of the thallophyte life cycle.
In many algae (e.g., Chara), the main plant is haploid; oogonia and antheridia form and produce haploid gametes. The only diploid part of the life cycle is the spore (fertilized egg cell), which undergoes meiosis to form haploid cells that develop into new plants. This is a haplontic life cycle (with zygotic meiosis).
Structure
The oogonia of certain thallophyte species are usually round or ovoid, with contents divided into several uninucleate oospheres. This is in contrast to the male antheridia, which are elongate and contain several nuclei.
In heterothallic species, the oogonia and antheridia are located on hyphal branches of different thallophyte colonies. Oogonia of such species can only be fertilized by antheridia from another colony, which ensures that self-fertilization is impossible. In contrast, homothallic species display the oogonia and antheridia on either the same hyphal branch or on separate hyphal branches, but within the same colony.
Fertilization
In a common mode of fertilization found in certain species of Thallophytes, the antheridia will bind with the oogonia. The antheridia will then form fertilization tubes connecting the antheridial cytoplasm with each oosphere within the oogonia. A haploid nucleus (gamete) from the antheridium will then be transferred through the fertilization tube into the oosphere, and fuse with the oosphere's haploid nucleus forming a diploid oospore. The oospore is then ready to germinate and develop into an adult diploid somatic stage.
| Biology and health sciences | Plant reproduction | Biology |
2396176 | https://en.wikipedia.org/wiki/Mekosuchus | Mekosuchus | Mekosuchus is a genus of extinct Australasian mekosuchine crocodilian. Species of Mekosuchus were generally small-sized (less than long), terrestrial animals with short, blunt-snouted heads and strong limbs. Four species are currently recognized, M. inexpectatus, M. whitehunterensis, M. sanderi and M. kalpokasi, all known primarily from fragmentary remains.
Mekosuchus was a successful and widespread genus, with its earliest members being found during the Oligocene and Miocene in mainland Australia. These species coexisted with a wide variety of other mekosuchines, forming a highly diverse crocodilian fauna including terrestrial hunters, semi-aquatic ambush predators and long-snouted fish eaters. The anatomy of the neck vertebrae of M. whitehunterensis might indicate that it was quite well adapted to stripping flesh from carcasses, using blade-like teeth and violent side-to-side thrashing.
The younger two species were found on the Pacific islands of New Caledonia and Vanuatu respectively and represent some of the youngest known mekosuchines. Mekosuchus possibly died out approximately 3,000 years ago, during the Holocene, but some authors have also suggested that they may have survived until even more recently. Unlike the mainland species, M. inexpectatus is known to have had bulbous posterior teeth that may have been used to crack the shells of crustaceans and molluscs. Some researchers suggest that they were possibly nocturnal animals living in close association with rainforest streams. What caused their extinction is unclear. Although some researchers suggest a human cause, others point out that the potential overlap with human settlements is insufficiently understood and no direct signs of human involvement have been found.
History and naming
Fossils of Mekosuchus were initially recovered from various different sites across New Caledonia, with the first bone, a fragmentary quadratojugal, being collected from Kanumera Bay in 1981. Subsequent years yielded more material stemming from both the Isle of Pines and the Pindai Caves on the main island of Grande Terre. This material included various cranial and postcranial remains, ranging from the complete holotype dentary to skull fragments and isolated vertebrae. Such fossils were first reported by Eric Buffetaut in 1983 and properly described by him and Jean-Christophe Balouet four years later in 1987. Due to the strange anatomy of the material, they initially assumed the animal represented an early eusuchian (at the time considered a suborder of Crocodylia) and placed it in its own family, the Mekosuchidae. Ten years later, in 1997, a second species was described by Paul Willis from the Riversleigh World Heritage Area in Queensland. Named M. whitehunterensis, it was not only geographically separated from the type species but also distinctly older, dating to the Late Oligocene. This marked the first but not the last known instance of a Mekosuchus species being found on the Australian mainland, instead of on an island. The second instance came only four years later with the description of Mekosuchus sanderi, also named by Willis. The most recently described species is Mekosuchus kalpokasi, which was named in 2002 from fragmentary remains discovered at an archaeological site on Vanuatu. A 2003 expedition also yielded additional remains of M. inexpectatus, with additional fossils of M. whitehunterensis being found as well.
The generic name of Mekosuchus derives from the Drehu name for Grande Terre, Mek, in combination with the suffix -suchus meaning crocodile.
Species
M. inexpectatus
The first discovered (and possibly youngest) member of this genus is the type species, M. inexpectatus, from the Holocene of New Caledonia. Radiocarbon dating suggests that the M. inexpectatus fossils from the Isle of Pines date to roughly 3,750 years BP. The Pindai Caves material on the other hand appears to have been younger, with some fossils possibly dating to approximately 1,720 years BP according to Balouet and Buffetaut. While survival into human times may be supported by remains found at archaeological sites, the age of some material has been disputed, with some authors suggesting a Pleistocene rather than Holocene age. The species name "inexpectatus" was chosen to reflect the unexpected appearance of a crocodilian on the isolated island of Grande Terre. Over 300 bones have been collected from the Pindai Caves alone, but the majority remains undescribed.
M. kalpokasi
The second Holocene species is M. kalpokasi, which lived on the island of Éfaté of Vanuatu approximately 3,200 to 2,706 years BP. Unlike with M. inexpectatus, the age of M. kalpokasi has not been disputed, making it the youngest confirmed species but also the least well understood. The remains of this species are fragmentary, consisting only of a partial maxilla and the ends of a tibia and fibula. For this reason, together with the poor preservation of the same region in the type species and the overlap in the two species' geographic and temporal ranges, Holt and colleagues suggest that this species may be synonymous with M. inexpectatus. This species was named after the prime minister of Vanuatu at the time of its discovery, Donald Kalpokas, who was noted for his strong support of the archaeological excavations that yielded the fossils of this crocodilian.
M. sanderi
M. sanderi is one of two Mekosuchus species known from the mainland of Australia, and lived during the Early Miocene in what is now Queensland. It was named by Willis based on two maxillae and various skull fragments, all stemming from the productive Riversleigh World Heritage Area, specifically the Ringtail Site within Faunal Zone C. The species name refers to Martin Sander, who supported Willis while studying in Germany.
M. whitehunterensis
M. whitehunterensis is the oldest known species and lived during the late Oligocene and early Miocene in Queensland. While also known from various localities of the Riversleigh, remains of M. whitehunterensis are older than those of M. sanderi and are specifically found in Faunal Zones A and B, which yielded the holotype maxilla as well as referred material including vertebrae and skull remains. Besides some more subtle differences, M. whitehunterensis is most readily distinguished from the type species by the presence of blade-like posterior teeth. The name derives from the White Hunter Site, the locality where the first remains of this species were found.
Description
The skull of Mekosuchus was brachycephalic or altirostral, meaning that it was notably short and raised rather than elongated and flattened as seen in most extant crocodilian species. In this regard Mekosuchus has been compared to Trilophosuchus and the modern, only distantly related genus Osteolaemus, which includes the extant dwarf crocodiles. Other researchers have also drawn comparisons between this genus and various other terrestrial crocodylomorphs, including notosuchians. Two reconstructions of the skull of Mekosuchus have been published, differing greatly from one another. Following the discovery of additional remains, Holt and colleagues reconstructed Mekosuchus inexpectatus in 2007 with a skull similar to that of modern dwarf crocodiles. In 2014, on the other hand, Scanlon produced a composite skeletal reconstruction of the skull of M. whitehunterensis, depicting it with a much more gently sloping rostrum that differed greatly from the previous depiction of the genus.
The best known species is Mekosuchus inexpectatus, which was described as displaying a unique mix of basal and derived features of the skull. The palatine bones, which form part of the roof of the mouth, narrow towards the back. The choanae, which connect the nasal passages with the throat, are located further forward (near the palatine-pterygoid suture) than in modern crocodiles and resemble what is seen in some Late Cretaceous crocodilians like Albertochampsa and Thoracosaurus. The wings of the pterygoid bone are well developed towards the back of the skull and the quadratojugal lacks a spine, which is a feature shared by alligatoroids but not by crocodylids. The position of the postorbital bar also differs from modern crocodilians, as it is not displaced inward, or only to a very small degree. The external nares open towards the sides and front of the skull (anterolaterally) rather than facing upwards (dorsally) and the opening is not contacted by the nasal bone. The eye sockets were well-developed and large and, unique among crocodilians, are in part formed by the maxilla, preventing the jugal and lacrimal bone from contacting each other. This unique contribution of the maxilla to the orbital rim is among the diagnostic features of this genus.
As in many crocodilians, the tooth row of Mekosuchus is placed in a distinct, wave-like manner also referred to as festooning. Festooning is usually the least pronounced in longirostrine forms like gharials, which have rather straight toothrows and much more prominent in short-snouted species. The maxilla displays some festooning in M. whitehunterensis and a much more extreme wave-form in M. kalpokasi. While festooning may be exaggerated in younger individuals, an analysis conducted on the material of M. kalpokasi has confirmed it to be an adult.
Other cranial features that can be used to differentiate the four species includes the extent of the palatal fenestrae. In M. sanderi and M. inexpectatus the front edge of the fenestrae extends until the 6th tooth of the maxilla, while in M. kalpokasi and M. whitehunterensis it extends only until the 7th. M. whitehunterensis further differs from all other species by possessing a longitudinal furrow beneath the eyes, while M. sanderi possesses a crest atop the squamosal bone. The extent of the shallow mandibular symphysis, the fused section at the front of the lower jaw, also differs between species. In M. inexpectatus the symphysis extends until the position of the 7th dentary tooth, while in M. whitehunterensis it ends at the 6th dentary tooth. This prevents the splenial from contributing to the symphysis, as it only extends forward to the level of the 7th dentary tooth across all species of the genus. The mandibular fenestra is strongly reduced, being almost closed in M. whitehunterensis, and the angular and surangular bones possess out-turned flanges, both of these are diagnostic for Mekosuchus.
Some postcranial remains are also known, primarily from M. inexpectatus and M. whitehunterensis. Between the two, the vertebrae of M. whitehunterensis are described in greater detail. They are procoelous and the neck (cervical) vertebrae specifically were noted to be shorter than those of the extant freshwater crocodile, even when accounting for the small size of Mekosuchus. This may indicate that at least M. inexpectatus had a shortened neck. The axis vertebra displays the typical sloping neural spine of crocodilians, but bears closer resemblance to alligatorids than to crocodylids. The following neural spines follow the overall pattern expected from a crocodile, though comparably taller than in other similarly sized animals. At the same time, the neural spines are not as inclined as in today's crocodiles, especially towards the front of the neck. This has been taken as evidence that, in spite of being small, Mekosuchus had well developed and strong epaxial neck musculature. It is possible that the neck anatomy of M. whitehunterensis represents a compromise between needed mobility and enlarged musculature. Similar neck vertebrae have been described for both Mekosuchus inexpectatus as well as the genera Trilophosuchus and Volia, indicating that this anatomy may have been more widespread among derived mekosuchines.
According to Willis, the humerus was similar in form to that of modern monitor lizards and Balouet & Buffetaut make mention of well developed insertions for the musculature. In a 2013 abstract it is mentioned that the tuber of the calcaneus, the heel, is robust and unusually short.
Various parts of the osteoderms, the bony armor, are known from across the different species and were specifically mentioned for M. inexpectatus and the Oligocene mainland species. The dorsal and tail osteoderms of the continental species are described as being highly modified, which may be related to biomechanics or simply a defensive adaptation.
Dentition
The dentition of the four known Mekosuchus species varies between the taxa in shape, number and occlusion. For instance, the lower jaw of M. inexpectatus contained 13 teeth, whereas that of M. whitehunterensis contained 16. Upper jaws, on the other hand, can be compared between M. kalpokasi and M. sanderi, with the former possessing 12 maxillary teeth and the latter 13.
However, the differences in shape are more noticeable. The oldest species, M. whitehunterensis, was described as having smooth maxillary teeth that would display flattened sides towards the back of the jaw, making them blade-like. A similar condition can be observed in the younger mainland species, M. sanderi, in which the teeth become laterally compressed following the 5th tooth of the maxilla. The Holocene species meanwhile lack these blade-like teeth. Although only the tooth sockets are known from M. kalpokasi, these suggest that the teeth were circular to ovate in cross section, with no signs of the lateral compression seen in older forms. The teeth of M. inexpectatus are better known, but likewise fail to display the same condition as seen in the continental species. Rather than being blade-like, the posterior teeth of M. inexpectatus were bulbous molariforms, better suited for crushing than for slicing. Similar tribodont teeth are seen in many unrelated types of eusuchians, including Allognathosuchus, Bernissartia and modern dwarf crocodiles.
Similarly, the way the maxillary teeth occlude with one another also varies between these forms. This can be determined either by the form of the toothrow itself or through the presence of occlusal pits that the teeth could slide into when the jaw was closed. Generally, two states are known: interfingering teeth, as seen in modern members of Crocodylus, and an overbite, as seen in Alligator; however, some species of Mekosuchus also display an intermediate pattern, combining an overbite with some degree of interfingering. M. inexpectatus displays a full overbite in the maxillary toothrow and the same is the case for M. whitehunterensis. In the case of the latter, most maxillary teeth were simply too closely spaced to allow for interlocking dentition, and towards the back of the skull, occlusal pits confirm that certain dentary teeth were positioned further inside (medially) relative to those of the upper jaw. M. sanderi and M. kalpokasi on the other hand feature a mix. In both of these species, the teeth towards the tip of the jaw and towards the back were arranged in an overbite; however, M. sanderi had an interlocking dentary tooth between the 7th and 8th teeth of the maxilla, while in M. kalpokasi the dentition interlocked between the 6th and 7th as well as the 7th and 8th maxillary teeth.
Size
Mekosuchus is among the smallest mekosuchines and is often referred to as a dwarf species in the same fashion as Trilophosuchus. While crocodilians grow throughout their lives, the rate at which they grow each year decreases as an individual approaches maturity. Consequently, in dwarf species like Mekosuchus this growth rate begins to decrease early on, resulting in their small body size relative to other crocodilians. Evidence that Mekosuchus specimens are mature, or at least almost mature, can be found in the anatomy of the vertebrae. According to Christopher Brochu, maturity in crocodilians can be determined by the fusion between the neural centra and the neural spine, which progresses from the last tail vertebra to the first neck vertebra. Based on this, the vertebrae of the mainland M. whitehunterensis could clearly be identified as having belonged to an almost mature individual, despite its small size. The most complete skull of this species measures only , which may correspond to a total body length of only . This puts M. whitehunterensis within the lower size range of today's dwarf crocodilians, Osteolaemus and Paleosuchus, both of which typically reach lengths of over when fully grown. Estimates for other members of the genus are generally less precise, but fall into the same overall size range. M. inexpectatus for instance has been estimated by Balouet to have reached a length of approximately , while Holt and colleagues estimate members of Mekosuchus to be around in length.
Phylogeny
When first describing Mekosuchus, Balouet and Buffetaut struggled to determine the relationship between it and modern crocodilians, noting how the taxon displayed a variety of basal and derived traits that did not align perfectly with any of the modern groups. They ultimately determined that Mekosuchus was a eusuchian based on the choanae and the procoelous vertebrae, and placed it in the monotypic family Mekosuchidae, which they believed to have been the sister group to all three modern crocodilian families. Since then, research on Australasian crocodilians has placed a wide range of other taxa in the family, which is now referred to by the name Mekosuchinae. Although mekosuchines are still a poorly understood group whose internal and external relationships commonly shift, Mekosuchus is traditionally allied with other altirostral forms such as Trilophosuchus and Quinkana. Willis (1997) suggests a close link between Mekosuchus and Trilophosuchus, with Quinkana as their sister taxon, while Mead et al. (2002) place Mekosuchus, Quinkana and a then unnamed Volia in a large polytomy as sisters to Trilophosuchus within the clade Mekosuchini. A 2018 tip dating study by Lee & Yates using a combination of morphological, molecular (DNA sequencing), and stratigraphic (fossil age) data recovered broadly similar results, although the precise relations within Mekosuchini do differ. Here, Trilophosuchus was recovered as the closest relative of Quinkana, with Mekosuchus as the sister taxon to their grouping and Volia as the basalmost member of Mekosuchini.
The most recent analysis was performed by Ristevski et al. in 2023 and put a broader focus not just on mekosuchines but on Australasian crocodyliforms in general, including the extant crocodylids of Australia, Australian gavialoids as well as more basal taxa like those placed in Susisuchidae. Six out of eight analyses recovered Mekosuchinae as a monophyletic group, similar to the results of Lee and Yates. These analyses recovered most mekosuchines within Mekosuchini, which in turn was split into two clades: one containing large, continental forms and the other small and/or insular taxa. The latter clade somewhat resembles the previous relationships suggested for Mekosuchus, as it also contains Volia and Trilophosuchus. Notably however, "Baru" huberi was recovered as the basalmost member of this group, while Quinkana was placed in the large-bodied, continental clade. The remaining two trees deviated greatly from the traditional composition of Mekosuchinae, with Kambara and Australosuchus being recovered elsewhere in Crocodylia and Mekosuchinae also including the clade Orientalosuchina, small-bodied Cretaceous to Paleogene crocodilians from Asia. However, support for these trees is low, both in terms of the phylogenetic results and of the morphological evidence, as many of the uniting characters are widespread among crocodilians. Regardless of the relationship between Mekosuchinae and Orientalosuchina, the closest relatives of Mekosuchus remain the same across the analyses, which generally recover the same small-bodied clade composed of "Baru" huberi, Volia, Trilophosuchus and Mekosuchus. Similar results were also recovered by Yates and Stein in their re-evaluation of Ultrastenos and "Baru" huberi.
Paleobiogeography
While fossil evidence shows that Mekosuchus originated on mainland Australia, little is known about how it dispersed throughout the South Pacific. Currently, three mekosuchines are known from the region: M. inexpectatus, M. kalpokasi and Volia. M. inexpectatus may have had the longest temporal range among them, with estimates suggesting that it may have first appeared nearly 4,000 years ago. This species is known exclusively from New Caledonia, which makes it the closest geographically to mainland Australia. There is some overlap between the fauna of New Caledonia and that of Vanuatu, with the two islands sharing 12% of their native lizards. One factor possibly important to the similarities and differences among the islands of the region is the geology of the Inner and Outer Melanesian Arc. The former split from Australia during the Cretaceous, while the latter only formed during the Paleogene and Neogene. As mekosuchines first appeared during the Eocene, Mead and colleagues argue that continental drift and break-up could not have played a part in their appearance in the South Pacific. Instead, it is considered more likely that the ancestors of the insular mekosuchines traveled short distances across the ocean to arrive on the islands of the Inner Melanesian Arc, before dispersing between the islands of the South Pacific from there. Although it is not known whether mekosuchines were tolerant of saltwater or had the same adaptations for marine dispersal as modern crocodiles (such as salt glands), it is possible that they could have actively swum between landmasses or drifted with the use of natural rafts. This process would have greatly benefited from the lower sea levels present during the late Cenozoic, which decreased the distance between now isolated islands and in some instances united whole island chains. The presence of these significant landmasses could have served as stopping points or even supported populations during the dispersal of these animals. For this reason, it is believed that Mekosuchus only dispersed into the South Pacific relatively recently. Mead and colleagues name the Oligocene as the earliest possible date, though an even more recent Quaternary dispersal is deemed more likely.
Paleobiology
Mekosuchus, like some of its closest relatives, is believed to have been a terrestrial animal. Evidence for this may be found in several parts of its anatomy. The skull is altirostral, similar to extinct terrestrial forms like notosuchians and members of the Planocraniidae, while semi-aquatic crocodilians typically have flattened platyrostral skulls, adapted to reduce drag and allow raising the eyes and nose out of the water without drawing the attention of potential prey items. In Mekosuchus, neither the eyes nor the nares are built for an aquatic mode of life. Rather than opening towards the top of the skull, which would allow the animal to breathe while remaining largely submerged, the nares open towards the front of the skull, and the eyes are similarly directed towards the sides, not the top. Balouet and Buffetaut further point to the well-developed muscular insertions and the absence of freshwater in the deposition area, while noting that karstic environments are often associated with terrestrial crocodylomorphs. In 1995 Australian paleontologist Paul Willis informally suggested that animals like mekosuchines may have filled a niche equivalent to modern monitor lizards, even going as far as to suggest arboreal (climbing) habits. However, this idea has been dismissed by more recent research, as monitor lizards had been present in Australia for longer than assumed by Willis, while analysis of mekosuchine toe bones showed no significant differences from those of other crocodilians, thus not supporting the notion that they were exceptional climbers.
The strong neck musculature inferred for Mekosuchus whitehunterensis has been interpreted as an adaptation for ripping chunks of flesh from carcasses. In modern crocodilians this is achieved either by shaking the head side to side or by employing the death roll maneuver. It is noted that the small size of Mekosuchus would render the death roll maneuver less effective than in species with a body length between long, whereas headshaking is favored by small animals like juveniles. Furthermore, Stein, Archer and Hand argue that the well-developed epaxial musculature would primarily increase the force generated by headshaking, whereas a death roll would carry a greater risk of the animal harming itself and damaging its limbs trying to perform it on land. Finally, M. whitehunterensis could have also used its neck musculature to strip flesh by pulling and lifting its head against a constrained or weighed-down carcass, behavior that has also been inferred for more ancient archosaurs. Whether this mode of feeding was used to rip apart much larger prey items or utilized for scavenging is unclear, though Stein, Archer and Hand suggest that it may have been especially advantageous for the latter, allowing even relatively small animals to consume large amounts of food.
These mainland species are known from localities that have also preserved the fossilised remains of multiple other mekosuchines, which they may have coexisted with. The White Hunter Site that yielded M. whitehunterensis also preserved the broad-snouted generalist Baru wickeni and the narrower-snouted Ultrastenos, as well as the terrestrial ziphodont Quinkana meboldi. The younger Ringtail Site of Riversleigh on the other hand preserves another species of Baru, Mekosuchus sanderi and Trilophosuchus. How so many crocodilians could have coexisted with one another may have multiple explanations. On the one hand, the differing skull shapes between them, especially in the case of the White Hunter Site, may have been enough for all taxa to fill different niches and thus avoid competing with one another. It is also possible that these assemblages were the result of thanatocoenosis and that in life, these animals could have had different habitat preferences. However, Willis observed that the mammalian fauna of the Riversleigh WHA indicates a complex but clearly defined pattern of different ecomorphs that filled different niches. For this reason, he suggests that the Riversleigh crocodilians were truly sympatric. Willis takes particular note of Trilophosuchus, which was a box-headed terrestrial form similar to Mekosuchus and thus may have inhabited a similar niche, as opposed to the much larger, semi-aquatic crocodilians of the site. It is however possible that the two were morphologically and ecologically much more distinct than currently thought, and that the apparent similarities are simply exaggerated by the lack of better material.
Unlike the bladed teeth of the mainland species, Mekosuchus inexpectatus had specialized back teeth better suited for cracking hard-shelled invertebrates such as molluscs, crustaceans and insects. Balouet and Buffetaut suggest that it may have fed on molluscs of the genus Placostylus, which was common on New Caledonia. Based on newer material and the previously noted similarities between Mekosuchus and modern dwarf crocodiles, Holt and colleagues suggest that M. inexpectatus could have lived a lifestyle similar to that of the modern dwarf crocodiles (Osteolaemus spp.) or dwarf caimans (Paleosuchus spp.). According to their hypothesis, M. inexpectatus may have inhabited small, slow-moving streams in the rainforests of New Caledonia and foraged at night near the water's edge and on land.
Extinction
The extinction of Mekosuchus in the South Pacific has historically been linked to the arrival of human settlers, in particular the Lapita people. Supporters of this hypothesis point to the fact that the range of Mekosuchus overlaps with human settlement of Vanuatu, and to the direct association between the bones of Mekosuchus kalpokasi and human artifacts at the Arapus archaeological site on the island of Efate. If the extinction of this taxon was linked to the arrival of humans, multiple factors may have contributed to its disappearance, including the introduction of invasive species such as pigs and rats, habitat destruction and use as a food source. However, this idea is not universally accepted and has been disputed by other researchers. Anderson and colleagues for instance note that in the case of Mekosuchus inexpectatus, most remains were deposited prior to human settlement of New Caledonia, with only a single mandible overlapping with human presence. They further highlight that no evidence exists of humans contributing to the crocodilian's extinction.
| Biology and health sciences | Prehistoric crocodiles | Animals |
2397362 | https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker%20conditions | Karush–Kuhn–Tucker conditions | In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.
The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.
Nonlinear optimization problem
Consider the following nonlinear optimization problem in standard form:
minimize f(x)
subject to g_i(x) ≤ 0 for i = 1, …, m, and h_j(x) = 0 for j = 1, …, ℓ,
where x ∈ X is the optimization variable chosen from a convex subset X of R^n, f is the objective or utility function, g_i (i = 1, …, m) are the inequality constraint functions and h_j (j = 1, …, ℓ) are the equality constraint functions. The numbers of inequalities and equalities are denoted by m and ℓ respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function
L(x, μ, λ) = f(x) + μᵀg(x) + λᵀh(x) = f(x) + Σ_{i=1}^m μ_i g_i(x) + Σ_{j=1}^ℓ λ_j h_j(x),
where g(x) = (g_1(x), …, g_m(x))ᵀ, h(x) = (h_1(x), …, h_ℓ(x))ᵀ, and μ ∈ R^m and λ ∈ R^ℓ are the dual variables or KKT multiplier vectors.
The Karush–Kuhn–Tucker theorem then states the following: if (x*, μ*) is a saddle point of L(x, μ) in x ∈ X, μ ≥ 0, then x* is an optimal vector for the above optimization problem. Conversely, suppose that f and the g_i are convex and that there exists an x̂ ∈ X such that g(x̂) < 0 (Slater's condition); then with an optimal vector x* there is associated a μ* ≥ 0 such that (x*, μ*) is a saddle point of L(x, μ).
Since the idea of this approach is to find a supporting hyperplane on the feasible set , the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.
The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.
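For a concrete sense of what "solving the KKT system" means, consider the simplest case: an equality-constrained quadratic program, for which the KKT conditions reduce to a single linear system. The following sketch is a minimal illustration in Python with NumPy; the particular matrices Q, A and vectors c, b are arbitrary values chosen for the example, not taken from any reference.

import numpy as np

# Minimize (1/2) x^T Q x - c^T x  subject to  A x = b.
# Stationarity gives Q x + A^T lam = c; primal feasibility gives A x = b.
# Together they form one linear "KKT system" in the unknowns (x, lam).
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])        # symmetric positive definite
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])        # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([c, b])

sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print("x* =", x, "lambda* =", lam)
print("stationarity residual:", Q @ x + A.T @ lam - c)  # ~0
print("feasibility residual :", A @ x - b)              # ~0

With inequality constraints the system is no longer linear because of complementary slackness, which is why practical solvers (active-set, interior-point, sequential quadratic programming) can be viewed as different strategies for handling that combinatorial part.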
Necessary conditions
Suppose that the objective function f: R^n → R and the constraint functions g_i: R^n → R and h_j: R^n → R have subderivatives at a point x*. If x* is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants μ_i (i = 1, …, m) and λ_j (j = 1, …, ℓ), called KKT multipliers, such that the following four groups of conditions hold (stated here with gradients, for the differentiable case):
Stationarity
For minimizing f(x): ∇f(x*) + Σ_{i=1}^m μ_i ∇g_i(x*) + Σ_{j=1}^ℓ λ_j ∇h_j(x*) = 0
For maximizing f(x): −∇f(x*) + Σ_{i=1}^m μ_i ∇g_i(x*) + Σ_{j=1}^ℓ λ_j ∇h_j(x*) = 0
Primal feasibility
g_i(x*) ≤ 0 for i = 1, …, m
h_j(x*) = 0 for j = 1, …, ℓ
Dual feasibility
μ_i ≥ 0 for i = 1, …, m
Complementary slackness
μ_i g_i(x*) = 0 for i = 1, …, m
The last condition is sometimes written in the equivalent form μᵀg(x*) = Σ_{i=1}^m μ_i g_i(x*) = 0, which is possible because dual and primal feasibility make every term of the sum non-positive, so the sum vanishes only if each term does.
In the particular case m = 0, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.
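The four groups of conditions can also be verified mechanically at a candidate point. The sketch below is an illustrative Python/NumPy check only; the objective, the single constraint and the candidate point are invented for demonstration purposes.

import numpy as np

# Minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2  subject to  g(x) = x1 + x2 - 2 <= 0.
# Candidate optimum: the projection of the unconstrained minimizer (1, 2)
# onto the boundary line x1 + x2 = 2.
x_star = np.array([0.5, 1.5])

grad_f = 2.0 * (x_star - np.array([1.0, 2.0]))   # gradient of f at x*
g_val = x_star.sum() - 2.0                       # constraint value at x*
grad_g = np.array([1.0, 1.0])                    # gradient of g (constant)

# Solve the stationarity equation grad_f + mu * grad_g = 0 for mu
# in the least-squares sense.
mu = np.linalg.lstsq(grad_g.reshape(-1, 1), -grad_f, rcond=None)[0][0]

print("KKT multiplier mu    :", mu)                     # 1.0 here
print("stationarity residual:", grad_f + mu * grad_g)   # ~0
print("primal feasibility   :", g_val <= 1e-12)         # g(x*) <= 0
print("dual feasibility     :", mu >= 0)
print("complementary slack. :", abs(mu * g_val) < 1e-12)

All four checks pass at this point, which is consistent with it being the constrained minimizer.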
Proof
Interpretation: KKT conditions as balancing constraint-forces in state space
The primal problem can be interpreted as moving a particle in the space of x, and subjecting it to three kinds of force fields:
f is a potential field that the particle is minimizing. The force generated by f is −∇f.
The g_i are one-sided constraint surfaces. The particle is allowed to move inside the region g_i(x) ≤ 0, but whenever it touches the surface g_i(x) = 0, it is pushed inwards.
The h_j are two-sided constraint surfaces. The particle is allowed to move only on the surface h_j(x) = 0.
Primal stationarity states that the "force" of ∇f(x*) is exactly balanced by a linear sum of the constraint forces μ_i ∇g_i(x*) and λ_j ∇h_j(x*).
Dual feasibility additionally states that all the μ_i ∇g_i(x*) forces must be one-sided, pointing inwards into the feasible set for x.
Complementary slackness states that if g_i(x*) < 0, then the force coming from g_i must be zero, i.e., μ_i ∇g_i(x*) = 0: since the particle is not on the boundary, the one-sided constraint force cannot activate.
Matrix representation
The necessary conditions can be written with Jacobian matrices of the constraint functions. Let g: R^n → R^m be defined as g(x) = (g_1(x), …, g_m(x))ᵀ and let h: R^n → R^ℓ be defined as h(x) = (h_1(x), …, h_ℓ(x))ᵀ. Let μ = (μ_1, …, μ_m)ᵀ and λ = (λ_1, …, λ_ℓ)ᵀ, and let Dg(x*) and Dh(x*) denote the Jacobian matrices of g and h at x*. Then the necessary conditions can be written as:
Stationarity
For maximizing f(x): ∇f(x*) − Dg(x*)ᵀμ − Dh(x*)ᵀλ = 0
For minimizing f(x): ∇f(x*) + Dg(x*)ᵀμ + Dh(x*)ᵀλ = 0
Primal feasibility
g(x*) ≤ 0 and h(x*) = 0
Dual feasibility
μ ≥ 0
Complementary slackness
μᵀg(x*) = 0
Regularity conditions (or constraint qualifications)
One can ask whether a minimizer point x* of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer x* of a function f(x) in an unconstrained problem has to satisfy the condition ∇f(x*) = 0. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. A standard counter-example shows why some condition is needed: for minimizing x subject to x² ≤ 0, the only feasible point is x = 0, but the constraint gradient vanishes there, so no multiplier can satisfy stationarity. Common examples of conditions that guarantee that the KKT conditions hold at a minimizer include the following, with the LICQ the most frequently used one:
Linear independence constraint qualification (LICQ): the gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at x*.
Mangasarian–Fromovitz constraint qualification (MFCQ): the gradients of the equality constraints are linearly independent at x*, and there exists a direction d with ∇g_i(x*)ᵀd < 0 for every active inequality constraint and ∇h_j(x*)ᵀd = 0 for every equality constraint.
Constant rank constraint qualification (CRCQ): for each subset of the gradients of the active inequality constraints and of the equality constraints, the rank is constant in a neighbourhood of x*.
Constant positive linear dependence constraint qualification (CPLD): for each such subset, if it is positive-linearly dependent at x*, then it remains positive-linearly dependent in a neighbourhood of x*.
Quasi-normality constraint qualification (QNCQ): a weaker condition that rules out certain degenerate limiting behaviour of the multipliers.
Slater's condition: for a convex problem, there exists a feasible point strictly satisfying all inequality constraints.
It can be shown that the following strict implications hold:
LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ
and
LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ
In practice weaker constraint qualifications are preferred since they apply to a broader selection of problems.
Sufficient conditions
In some cases, the necessary conditions are also sufficient for optimality. In general, however, the necessary conditions are not sufficient for optimality and additional information is required, such as the second-order sufficient conditions (SOSC). For smooth functions, SOSC involve second derivatives, which explains the name.
The necessary conditions are sufficient for optimality if the objective function of a maximization problem is a differentiable concave function, the inequality constraints are differentiable convex functions, the equality constraints are affine functions, and Slater's condition holds. Similarly, if the objective function of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality.
It was shown by Martin in 1985 that the broader class of functions for which the KKT conditions guarantee global optimality are the so-called Type 1 invex functions.
Second-order sufficient conditions
For smooth, non-linear optimization problems, a second order sufficient condition is given as follows.
The solution x* found in the above section is a constrained local minimum if, for the Lagrangian
L(x, μ, λ) = f(x) + Σ_{i=1}^m μ_i g_i(x) + Σ_{j=1}^ℓ λ_j h_j(x),
it holds that
sᵀ ∇²_{xx} L(x*, μ*, λ*) s ≥ 0
for every vector s ≠ 0 satisfying
∇h_j(x*)ᵀ s = 0 for all j and ∇g_i(x*)ᵀ s = 0 for the active inequality constraints,
where only those active inequality constraints corresponding to strict complementarity (i.e. where μ_i > 0) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict.
If sᵀ ∇²_{xx} L(x*, μ*, λ*) s = 0 for some such s, the third-order Taylor expansion of the Lagrangian should be used to verify whether x* is a local minimum. The minimization of f(x_1, x_2) = (x_2 − 2x_1²)(x_2 − x_1²) is a good counter-example (see also Peano surface): at the origin the second-order condition holds with equality in the direction s = (1, 0), yet the origin is not a local minimum.
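In practice the second-order check is often carried out by restricting the Hessian of the Lagrangian to the directions allowed by the active constraints, i.e., the null space of their gradients. The following rough Python sketch uses a toy problem (minimizing x1^2 - x2^2 on the line x2 = 0), chosen purely for illustration.

import numpy as np
from scipy.linalg import null_space

# Minimize f(x) = x1^2 - x2^2  subject to  h(x) = x2 = 0.
# The KKT conditions give x* = (0, 0) with lambda* = 0.
hess_L = np.array([[2.0, 0.0],    # Hessian of L = f + lambda*h at x*
                   [0.0, -2.0]])
J = np.array([[0.0, 1.0]])        # Jacobian of the active constraints

Z = null_space(J)                 # basis of directions s with J s = 0
reduced = Z.T @ hess_L @ Z        # Hessian restricted to feasible directions
print("reduced Hessian eigenvalues:", np.linalg.eigvalsh(reduced))

The full Hessian is indefinite, but its restriction to the feasible directions (here, the x1-axis) is positive definite, so x* is a strict constrained local minimum even though it would not be a minimum of f without the constraint.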
Economics
Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example, consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting Q be the quantity of output produced (to be chosen), R(Q) be sales revenue with a positive first derivative and with a zero value at zero output, C(Q) be production costs with a positive first derivative and with a non-negative value at zero output, and G_min be the positive minimal acceptable level of profit, the problem is a meaningful one if the revenue function levels off so that it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is
minimize −R(Q)
subject to G_min ≤ R(Q) − C(Q) and Q ≥ 0,
and the KKT conditions are
(dR/dQ)(1 + μ) − μ(dC/dQ) ≤ 0,
Q ≥ 0,
Q[(dR/dQ)(1 + μ) − μ(dC/dQ)] = 0,
R(Q) − C(Q) − G_min ≥ 0,
μ ≥ 0,
μ[R(Q) − C(Q) − G_min] = 0.
Since Q = 0 would violate the minimum profit constraint, we have Q > 0, and hence the third condition implies that the first condition holds with equality. Solving that equality gives
dR/dQ = [μ/(1 + μ)](dC/dQ).
Because it was given that dR/dQ and dC/dQ are strictly positive, this equality along with the non-negativity condition on μ guarantees that μ is positive, and so the revenue-maximizing firm operates at a level of output at which marginal revenue dR/dQ is less than marginal cost dC/dQ, a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
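This qualitative derivation can be reproduced symbolically. In the hedged sketch below, the particular functions R(Q) = 20Q - Q^2 and C(Q) = 4Q and the profit floor G_min = 63 are invented for illustration; they make the profit constraint bind, since unconstrained revenue maximization (Q = 10) would yield a profit of only 60.

import sympy as sp

Q, mu = sp.symbols('Q mu', positive=True)
R = 20*Q - Q**2        # revenue, with dR/dQ > 0 on the relevant range
C = 4*Q                # cost
G_min = 63             # minimum acceptable profit

# Lagrangian of: minimize -R(Q) subject to G_min - (R(Q) - C(Q)) <= 0
L = -R + mu*(G_min - (R - C))
sol = sp.solve([sp.Eq(sp.diff(L, Q), 0),     # stationarity
                sp.Eq(R - C - G_min, 0)],    # binding profit constraint
               [Q, mu], dict=True)
print(sol)   # expected: [{Q: 9, mu: 1}]

opt = sol[0]
mr = sp.diff(R, Q).subs(opt)   # marginal revenue at the optimum
mc = sp.diff(C, Q).subs(opt)   # marginal cost at the optimum
print(mr, "<", mc)             # 2 < 4, as the KKT analysis predicts

The output matches the general result above: with mu = 1, marginal revenue equals mu/(1 + mu) = 1/2 of marginal cost.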
Value function
If we reconsider the optimization problem as a maximization problem with constant inequality constraints:
maximize f(x)
subject to g_i(x) ≤ a_i and h_j(x) = 0.
The value function is defined as
V(a_1, …, a_m) = sup { f(x) : x ∈ X, g_i(x) ≤ a_i for i = 1, …, m, h_j(x) = 0 for j = 1, …, ℓ },
so the domain of V is {a ∈ R^m : there exists x ∈ X with g_i(x) ≤ a_i for i = 1, …, m}.
Given this definition, each coefficient μ_i is the rate at which the value function increases as a_i increases. Thus if each a_i is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function f. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
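This shadow-price reading can be checked numerically by perturbing a constraint level and comparing the change in the optimal value with the multiplier. The sketch below reuses the small projection problem from the earlier verification example (a minimization, so the sign flips: relaxing the constraint lowers the optimal value at rate mu); all numbers are again arbitrary.

import numpy as np

def V(a):
    # Optimal value of: minimize (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 <= a.
    # For a <= 3 the constraint is active, and the optimum is the projection
    # of (1, 2) onto the line x1 + x2 = a.
    x = np.array([1.0, 2.0]) - ((3.0 - a) / 2.0) * np.array([1.0, 1.0])
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

a, delta = 2.0, 1e-6
mu = 3.0 - a                        # KKT multiplier of the active constraint
finite_diff = (V(a + delta) - V(a)) / delta
print(finite_diff, "~=", -mu)       # relaxing the constraint lowers V at rate mu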
Generalizations
With an extra multiplier μ_0 ≥ 0, which may be zero (as long as (μ_0, μ, λ) ≠ 0), in front of ∇f(x*), the KKT stationarity conditions turn into
μ_0 ∇f(x*) + Σ_{i=1}^m μ_i ∇g_i(x*) + Σ_{j=1}^ℓ λ_j ∇h_j(x*) = 0,
μ_i g_i(x*) = 0 for i = 1, …, m,
which are called the Fritz John conditions. These optimality conditions hold without constraint qualifications, and they are equivalent to the optimality condition "KKT or (not-MFCQ)".
The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
| Mathematics | Optimization | null |
16817594 | https://en.wikipedia.org/wiki/Tipping%20points%20in%20the%20climate%20system | Tipping points in the climate system | In climate science, a tipping point is a critical threshold that, when crossed, leads to large, accelerating and often irreversible changes in the climate system. If tipping points are crossed, they are likely to have severe impacts on human society and may accelerate global warming. Tipping behavior is found across the climate system, for example in ice sheets, mountain glaciers, circulation patterns in the ocean, in ecosystems, and the atmosphere. Examples of tipping points include thawing permafrost, which will release methane, a powerful greenhouse gas, or melting ice sheets and glaciers reducing Earth's albedo, which would warm the planet faster. Thawing permafrost is a threat multiplier because it holds roughly twice as much carbon as the amount currently circulating in the atmosphere.
Tipping points are often, but not necessarily, abrupt. For example, with average global warming somewhere between and , the Greenland ice sheet passes a tipping point and is doomed, but its melt would take place over millennia. Tipping points are possible at today's global warming of just over above preindustrial times, and highly probable above of global warming. It is possible that some tipping points are close to being crossed or have already been crossed, like those of the West Antarctic and Greenland ice sheets, the Amazon rainforest and warm-water coral reefs.
A danger is that if the tipping point in one system is crossed, this could cause a cascade of other tipping points, leading to severe, potentially catastrophic, impacts. Crossing a threshold in one part of the climate system may trigger another tipping element to tip into a new state. For example, ice loss in West Antarctica and Greenland will significantly alter ocean circulation. Sustained warming of the northern high latitudes as a result of this process could activate tipping elements in that region, such as permafrost degradation, and boreal forest dieback.
Scientists have identified many elements in the climate system which may have tipping points. As of September 2022, nine global core tipping elements and seven regional impact tipping elements are known. Out of those, one regional and three global climate elements will likely pass a tipping point if global warming reaches . They are the Greenland ice sheet collapse, West Antarctic ice sheet collapse, tropical coral reef die off, and boreal permafrost abrupt thaw.
Tipping points exist in a range of systems, for example in the cryosphere, within ocean currents, and in terrestrial systems. The tipping points in the cryosphere include Greenland ice sheet disintegration, West Antarctic ice sheet disintegration, East Antarctic ice sheet disintegration, Arctic sea ice decline, retreat of mountain glaciers, and permafrost thaw. The tipping points for ocean current changes include the Atlantic Meridional Overturning Circulation (AMOC), the North Subpolar Gyre and the Southern Ocean overturning circulation. Lastly, the tipping points in terrestrial systems include Amazon rainforest dieback, boreal forest biome shift, Sahel greening, and vulnerable stores of tropical peat carbon.
Definition
The IPCC Sixth Assessment Report defines a tipping point as a "critical threshold beyond which a system reorganizes, often abruptly and/or irreversibly". It can be brought about by a small disturbance causing a disproportionately large change in the system. It can also be associated with self-reinforcing feedbacks, which could lead to changes in the climate system irreversible on a human timescale. For any particular climate component, the shift from one state to a new stable state may take many decades or centuries.
The 2019 IPCC Special Report on the Ocean and Cryosphere in a Changing Climate defines a tipping point as: "A level of change in system properties beyond which a system reorganises, often in a non-linear manner, and does not return to the initial state even if the drivers of the change are abated. For the climate system, the term refers to a critical threshold at which global or regional climate changes from one stable state to another stable state."
In ecosystems and in social systems, a tipping point can trigger a regime shift, a major systems reorganisation into a new stable state. Such regime shifts need not be harmful. In the context of the climate crisis, the tipping point metaphor is sometimes used in a positive sense, such as to refer to shifts in public opinion in favor of action to mitigate climate change, or the potential for minor policy changes to rapidly accelerate the transition to a green economy.
Comparison of tipping points
Scientists have identified many elements in the climate system which may have tipping points. In the early 2000s the IPCC began considering the possibility of tipping points, originally referred to as large-scale discontinuities. At that time the IPCC concluded they would only be likely in the event of global warming of or more above preindustrial times, and another early assessment placed most tipping point thresholds at above 1980–1999 average warming. Since then estimates for global warming thresholds have generally fallen, with some thought to be possible in the Paris Agreement range () by 2016. As of 2021 tipping points are considered to have significant probability at today's warming level of just over , with high probability above of global warming. Some tipping points may be close to being crossed or have already been crossed, like those of the ice sheets in West Antarctic and Greenland, warm-water coral reefs, and the Amazon rainforest.
As of September 2022, nine global core tipping elements and seven regional impact tipping elements have been identified. Out of those, one regional and three global climate elements are estimated to likely pass a tipping point if global warming reaches , namely Greenland ice sheet collapse, West Antarctic ice sheet collapse, tropical coral reef die off, and boreal permafrost abrupt thaw. Two further tipping points are forecast as likely if warming continues to approach : Barents sea ice abrupt loss, and the Labrador sea subpolar gyre collapse.
Tipping points in the cryosphere
Greenland ice sheet disintegration
The Greenland ice sheet is the second largest ice sheet in the world, and the water which it holds, if completely melted, would raise sea levels globally by 7.2 metres (24 ft). Due to global warming, the ice sheet is melting at an accelerating rate, adding almost 1 mm to global sea levels every year. Around half of the ice loss occurs via surface melting, and the remainder occurs at the base of the ice sheet where it touches the sea, by calving (breaking off) icebergs from its margins.
The Greenland ice sheet has a tipping point because of the melt-elevation feedback. Surface melting reduces the height of the ice sheet, and air at a lower altitude is warmer. The ice sheet is then exposed to warmer temperatures, accelerating its melt. A 2021 analysis of sub-glacial sediment at the bottom of a Greenland ice core found that the Greenland ice sheet melted away at least once during the last million years, which strongly suggests that its tipping point lies below the maximum temperature increase (relative to preindustrial conditions) experienced during that period. There is some evidence that the Greenland ice sheet is losing stability, and getting close to a tipping point.
West Antarctic ice sheet disintegration
The West Antarctic Ice Sheet (WAIS) is a large ice sheet in Antarctica; in places it is more than thick. It sits on bedrock mostly below sea level, having formed a deep subglacial basin due to the weight of the ice sheet over millions of years. As such, it is in contact with the heat from the ocean, which makes it vulnerable to fast and irreversible ice loss. A tipping point could be reached once the WAIS's grounding lines (the points at which the ice no longer sits on rock and becomes floating ice shelves) retreat behind the edge of the subglacial basin, resulting in self-sustaining retreat into the deeper basin - a process known as the Marine Ice Sheet Instability (MISI). Thinning and collapse of the WAIS's ice shelves is helping to accelerate this grounding line retreat. If completely melted, the WAIS would contribute around of sea level rise over thousands of years.
Ice loss from the WAIS is accelerating, and some outlet glaciers are estimated to be close to or possibly already beyond the point of self-sustaining retreat. The paleo record suggests that during the past few hundred thousand years, the WAIS largely disappeared in response to similar levels of warming and emission scenarios projected for the next few centuries.
As with the other ice sheets, there is a counteracting negative feedback - greater warming also intensifies the effects of climate change on the water cycle, which results in increased precipitation over the ice sheet in the form of snow during the winter. This snow would freeze on the surface, and the resulting increase in the surface mass balance (SMB) counteracts some fraction of the ice loss. In the IPCC Fifth Assessment Report, it was suggested that this effect could potentially overpower increased ice loss under the higher levels of warming and result in a small net ice gain, but by the time of the IPCC Sixth Assessment Report, improved modelling had shown that glacier breakup would consistently accelerate at a faster rate.
East Antarctic ice sheet disintegration
East Antarctic ice sheet is the largest and thickest ice sheet on Earth, with the maximum thickness of . A complete disintegration would raise the global sea levels by , but this may not occur until global warming of , while the loss of two-thirds of its volume may require at least of warming to trigger. Its melt would also occur over a longer timescale than the loss of any other ice on the planet, taking no less than 10,000 years to finish. However, the subglacial basin portions of the East Antarctic ice sheet may be vulnerable to tipping at lower levels of warming. The Wilkes Basin is of particular concern, as it holds enough ice to raise sea levels by about .
Arctic sea ice decline
Arctic sea ice was once identified as a potential tipping element. The loss of sunlight-reflecting sea ice during summer exposes the (dark) ocean, which would warm. Arctic sea ice cover is likely to melt entirely under even relatively low levels of warming, and it was hypothesised that this could eventually transfer enough heat to the ocean to prevent sea ice recovery even if the global warming is reversed. Modelling now shows that this heat transfer during the Arctic summer does not overcome the cooling and the formation of new ice during the Arctic winter. As such, the loss of Arctic ice during the summer is not a tipping point for as long as the Arctic winter remains cool enough to enable the formation of new Arctic sea ice. However, if the higher levels of warming prevent the formation of new Arctic ice even during winter, then this change may become irreversible. Consequently, Arctic Winter Sea Ice is included as a potential tipping point in a 2022 assessment.
Additionally, the same assessment argued that while the rest of the ice in the Arctic Ocean may recover from a total summertime loss during the winter, ice cover in the Barents Sea may not reform during the winter even below of warming. This is because the Barents Sea is already the fastest-warming part of the Arctic: in 2021-2022 it was found that while warming within the Arctic Circle has already been nearly four times faster than the global average since 1979, the Barents Sea has warmed up to seven times faster than the global average. This tipping point matters because of the decades-long history of research into the connections between the state of Barents-Kara Sea ice and weather patterns elsewhere in Eurasia.
Retreat of mountain glaciers
Mountain glaciers are the largest repository of land-bound ice after the Greenland and Antarctic ice sheets, and they are also melting as a result of climate change. A glacier reaches a tipping point when it enters a state of disequilibrium with the climate and will melt away unless temperatures go down. Examples include the glaciers of the North Cascade Range, where even in 2005 67% of the glaciers observed were in disequilibrium and will not survive the continuation of the present climate, or the French Alps, where the Argentière and Mer de Glace glaciers are expected to disappear completely by the end of the 21st century if current climate trends persist. Altogether, it was estimated in 2023 that 49% of the world's glaciers would be lost by 2100 at of global warming, and 83% of glaciers would be lost at . This would amount to one quarter and nearly half of mountain glacier mass, respectively, as only the largest, most resilient glaciers would survive the century. This ice loss would also contribute ~ and ~ to sea level rise, while the current likely trajectory of would result in a sea level rise contribution of ~ by 2100.
The single largest amount of glacier ice is located in the Hindu Kush Himalaya region, which is colloquially known as the Earth's Third Pole as a result. It is believed that one third of that ice will be lost by 2100 even if warming is limited to , while the intermediate and severe climate change scenarios (Representative Concentration Pathways (RCP) 4.5 and 8.5) are likely to lead to the loss of 50% and >67% of the region's glaciers over the same timeframe. Glacier melt is projected to accelerate regional river flows until the amount of meltwater peaks around 2060, going into an irreversible decline afterwards. Since regional precipitation will continue to increase even as the glacier meltwater contribution declines, annual river flows are only expected to diminish in the western basins where the contribution from the monsoon is low; however, irrigation and hydropower generation would still have to adjust to greater interannual variability and lower pre-monsoon flows in all of the region's rivers.
Permafrost thaw
Perennially frozen ground, or permafrost, covers large fractions of land – mainly in Siberia, Alaska, northern Canada and the Tibetan plateau – and can be up to a kilometre thick. Subsea permafrost up to 100 metres thick also occurs on the sea floor under part of the Arctic Ocean. This frozen ground holds vast amounts of carbon from plants and animals that have died and decomposed over thousands of years. Scientists believe there is nearly twice as much carbon in permafrost than is present in Earth's atmosphere.
As the climate warms and the permafrost begins to thaw, carbon dioxide and methane are released into the atmosphere. With higher temperatures, microbes become active and decompose the biological material in the permafrost, some of which is irreversibly lost. While most thaw is gradual and will take centuries, abrupt thaw can occur in some places where permafrost is rich in large ice masses, which once melted cause the ground to slump or form 'thermokarst' lakes over years to decades. These processes can become self-sustaining, leading to localised tipping dynamics, and could increase greenhouse gas emissions from permafrost by around 40%. Because carbon dioxide and methane are both greenhouse gases, they act as a self-reinforcing feedback on permafrost thaw, but are unlikely to lead to a global tipping point or runaway warming process.
Tipping points related to ocean current collapse
Atlantic Meridional Overturning Circulation (AMOC)
The Atlantic Meridional Overturning Circulation (AMOC), also known as the Gulf Stream System, is a large system of ocean currents. It is driven by differences in the density of water; colder and more salty water is heavier than warmer fresh water. The AMOC acts as a conveyor belt, sending warm surface water from the tropics north, and carrying cold fresh water back south. As warm water flows northwards, some evaporates which increases salinity. It also cools when it is exposed to cooler air. Cold, salty water is more dense and slowly begins to sink. Several kilometres below the surface, cold, dense water begins to move south. Increased rainfall and the melting of ice due to global warming dilutes the salty surface water, and warming further decreases its density. The lighter water is less able to sink, slowing down the circulation.
Theory, simplified models, and reconstructions of abrupt changes in the past suggest the AMOC has a tipping point. If freshwater input from melting glaciers reaches a certain threshold, it could collapse into a state of reduced flow. Even after melting stops, the AMOC may not return to its current state. It is unlikely that the AMOC will tip in the 21st century, but it may do so before 2300 if greenhouse gas emissions are very high. A weakening of 24% to 39% is expected depending on greenhouse emissions, even without tipping behaviour. If the AMOC does shut down, a new stable state could emerge that lasts for thousands of years, possibly triggering other tipping points.
In 2021, a study which used a primitive finite-difference ocean model estimated that AMOC collapse could be triggered by a sufficiently fast increase in ice melt even if the melt never reached the common thresholds for tipping obtained from slower change, implying that AMOC collapse is more likely than the estimates from complex, large-scale climate models suggest. Another 2021 study found early-warning signals in a set of AMOC indices, suggesting that the AMOC may be close to tipping. However, it was contradicted by another study published in the same journal the following year, which found a largely stable AMOC that had so far not been affected by climate change beyond its own natural variability. Two more studies published in 2022 have also suggested that the modelling approaches commonly used to evaluate the AMOC appear to overestimate the risk of its collapse.
North Subpolar Gyre
Southern Ocean overturning circulation
Tipping points in terrestrial systems
Amazon rainforest dieback
The Amazon rainforest is the largest tropical rainforest in the world. It is twice as big as India and spans nine countries in South America. It produces around half of its own rainfall by recycling moisture through evaporation and transpiration as air moves across the forest. This moisture recycling expands the area in which there is enough rainfall for rainforest to be maintained, and without it one model indicates around 40% of the current forest area would be too dry to sustain rainforest. However, when forest is lost via climate change (from droughts and wildfires) or deforestation, there will be less rain in downwind regions, increasing tree stress and mortality there. Eventually, if enough forest is lost a threshold can be reached beyond which large parts of the remaining rainforest may die off and transform into drier degraded forest or savanna landscapes, particularly in the drier south and east. In 2022, a study reported that the rainforest has been losing resilience since the early 2000s. Resilience is measured by recovery-time from short-term perturbations, with delayed return to equilibrium of the rainforest termed as critical slowing down. The observed loss of resilience reinforces the theory that the rainforest could be approaching a critical transition, although it cannot determine exactly when or if a tipping point will be reached.
Boreal forest biome shift
During the last quarter of the twentieth century, the zone of latitude occupied by taiga experienced some of the greatest temperature increases on Earth. Winter temperatures have increased more than summer temperatures. In summer, the daily low temperature has increased more than the daily high temperature. It has been hypothesised that boreal environments have only a few states which are stable in the long term - a treeless tundra/steppe, a forest with >75% tree cover and an open woodland with between ~20% and ~45% tree cover. Thus, continued climate change would be able to force at least some of the presently existing taiga forests into one of the two woodland states or even into a treeless steppe - but it could also shift tundra areas into woodland or forest states as they warm and become more suitable for tree growth.
These trends were first detected in the Canadian boreal forests in the early 2010s, and summer warming had also been shown to increase water stress and reduce tree growth in dry areas of the southern boreal forest in central Alaska and portions of far eastern Russia. In Siberia, the taiga is converting from predominantly needle-shedding larch trees to evergreen conifers in response to a warming climate.
Subsequent research in Canada found that even in the forests where biomass trends did not change, there was a substantial shift towards the deciduous broad-leaved trees with higher drought tolerance over the past 65 years. A Landsat analysis of 100,000 undisturbed sites found that the areas with low tree cover became greener in response to warming, but tree mortality (browning) became the dominant response as the proportion of existing tree cover increased. A 2018 study of the seven tree species dominant in the Eastern Canadian forests found that while warming alone increases their growth by around 13% on average, water availability is much more important than temperature. Also, further warming of up to would result in substantial declines unless matched by increases in precipitation.
A 2021 paper confirmed that boreal forests are much more strongly affected by climate change than the other forest types in Canada, and projected that most of the eastern Canadian boreal forests would reach a tipping point around 2080 under the RCP 8.5 scenario, which represents the largest potential increase in anthropogenic emissions. Another 2021 study projected that under the moderate SSP2-4.5 scenario, boreal forests would experience a 15% worldwide increase in biomass by the end of the century, but this would be more than offset by a 41% biomass decline in the tropics. In 2022, the results of a 5-year warming experiment in North America showed that the juveniles of the tree species which currently dominate the southern margins of the boreal forests fare the worst in response to even or of warming and the associated reductions in precipitation. While the temperate species which would benefit from such conditions are also present in the southern boreal forests, they are both rare and slower-growing.
Sahel greening
The Special Report on Global Warming of 1.5 °C and the IPCC Fifth Assessment Report indicate that global warming will likely result in increased precipitation across most of East Africa, parts of Central Africa and the principal wet season of West Africa. However, there is significant uncertainty in these projections, especially for West Africa. Currently, the Sahel is becoming greener, but precipitation has not fully recovered to the levels reached in the mid-20th century.
A study from 2022 concluded: "Clearly the existence of a future tipping threshold for the WAM (West African Monsoon) and Sahel remains uncertain as does its sign but given multiple past abrupt shifts, known weaknesses in current models, and huge regional impacts but modest global climate feedback, we retain the Sahel/WAM as a potential regional impact tipping element (low confidence)."
Some simulations of global warming and increased carbon dioxide concentrations have shown a substantial increase in precipitation in the Sahel/Sahara. This and the increased plant growth directly induced by carbon dioxide could lead to an expansion of vegetation into present-day desert, although it might be accompanied by a northward shift of the desert, i.e. a drying of northernmost Africa.
Vulnerable stores of tropical peat carbon: Cuvette Centrale peatland
Other tipping points
Coral reef die-off
Around 500 million people around the world depend on coral reefs for food, income, tourism and coastal protection. Since the 1980s, this has been threatened by the increase in sea surface temperatures, which is triggering mass bleaching of coral, especially in sub-tropical regions. A sustained ocean temperature spike of above average is enough to cause bleaching. Under heat stress, corals expel the small colourful algae which live in their tissues, causing them to turn white. The algae, known as zooxanthellae, have a symbiotic relationship with coral, such that without them the corals slowly die. After the zooxanthellae have disappeared, the corals are vulnerable to a transition towards a seaweed-dominated ecosystem, making it very difficult to shift back to a coral-dominated ecosystem. The IPCC estimates that coral reefs are projected to decline by a further 70–90% at 1.5 °C of warming, and that if the world warms by , they will become extremely rare.
Break-up of equatorial stratocumulus clouds
Cascading tipping points
Crossing a threshold in one part of the climate system may trigger another tipping element to tip into a new state. Such sequences of thresholds are called cascading tipping points, an example of a domino effect. Ice loss in West Antarctica and Greenland will significantly alter ocean circulation. Sustained warming of the northern high latitudes as a result of this process could activate tipping elements in that region, such as permafrost degradation, and boreal forest dieback. Thawing permafrost is a threat multiplier because it holds roughly twice as much carbon as the amount currently circulating in the atmosphere. Loss of ice in Greenland likely destabilises the West Antarctic ice sheet via sea level rise, and vice-versa, especially if Greenland were to melt first as West Antarctica is particularly vulnerable to contact with warm sea water.
A 2021 study with three million computer simulations of a climate model showed that nearly one-third of those simulations resulted in domino effects, even when temperature increases were limited to – the upper limit set by the Paris Agreement in 2015. The authors of the study said that the science of tipping points is so complex that there is great uncertainty as to how they might unfold, but nevertheless, argued that the possibility of cascading tipping points represents "an existential threat to civilisation". A network model analysis suggested that temporary overshoots of climate change – increasing global temperature beyond Paris Agreement goals temporarily as often projected – can substantially increase risks of climate tipping cascades ("by up to 72% compared with non-overshoot scenarios").
Formerly considered tipping elements
The possibility that the El Niño–Southern Oscillation (ENSO) is a tipping element had attracted attention in the past. Normally strong winds blow west across the South Pacific Ocean from South America to Australia. Every two to seven years, the winds weaken due to pressure changes and the air and water in the middle of the Pacific warms up, causing changes in wind movement patterns around the globe. This is known as El Niño and typically leads to droughts in India, Indonesia and Brazil, and increased flooding in Peru. In 2015/2016, this caused food shortages affecting over 60 million people. El Niño-induced droughts may increase the likelihood of forest fires in the Amazon. The threshold for tipping was estimated to be between and of global warming in 2016. After tipping, the system would be in a more permanent El Niño state, rather than oscillating between different states. This has happened in Earth's past, in the Pliocene, but the layout of the ocean was significantly different from now. So far, there is no definitive evidence indicating changes in ENSO behaviour, and the IPCC Sixth Assessment Report concluded that it is "virtually certain that the ENSO will remain the dominant mode of interannual variability in a warmer world." Consequently, the 2022 assessment no longer includes it in the list of likely tipping elements.
The Indian summer monsoon is another part of the climate system which was considered susceptible to irreversible collapse in earlier research. However, more recent research has demonstrated that warming tends to strengthen the Indian monsoon, and it is projected to strengthen in the future.
Methane hydrate deposits in the Arctic were once thought to be vulnerable to a rapid dissociation which would have a large impact on global temperatures, in a dramatic scenario known as a clathrate gun hypothesis. Later research found that it takes millennia for methane hydrates to respond to warming, while methane emissions from the seafloor rarely transfer from the water column into the atmosphere. IPCC Sixth Assessment Report states "It is very unlikely that gas clathrates (mostly methane) in deeper terrestrial permafrost and subsea clathrates will lead to a detectable departure from the emissions trajectory during this century".
Mathematical theory
Tipping point behaviour in the climate can be described in mathematical terms. Three types of tipping points have been identified: bifurcation-induced, noise-induced and rate-induced.
Bifurcation-induced tipping
Bifurcation-induced tipping happens when a particular parameter in the climate (for instance a change in environmental conditions or forcing), passes a critical level – at which point a bifurcation takes place – and what was a stable state loses its stability or simply disappears. The Atlantic Meridional Overturning Circulation (AMOC) is an example of a tipping element that can show bifurcation-induced tipping. Slow changes to the bifurcation parameters in this system – the salinity and temperature of the water – may push the circulation towards collapse.
Many types of bifurcations show hysteresis, which is the dependence of the state of a system on its history. For instance, depending on how warm it was in the past, there can be differing amounts of ice on the poles at the same concentration of greenhouse gases or temperature.
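Both bifurcation tipping and hysteresis can be illustrated with a minimal toy model rather than a real climate component. In the Python sketch below, the cubic dx/dt = a + x - x^3 is a standard pedagogical fold-bifurcation model (not a calibrated climate model); slowly sweeping the forcing a up and then back down leaves the system on different stable branches at the same forcing value, which is exactly the hysteresis described above.

import numpy as np

def equilibrate(a, x, steps=4000, dt=0.01):
    # Relax dx/dt = a + x - x^3 towards its nearest stable equilibrium.
    for _ in range(steps):
        x += dt * (a + x - x**3)
    return x

forcings = np.linspace(-1.0, 1.0, 81)

up = []
x = -1.0                        # start on the lower branch
for a in forcings:              # sweep the forcing upwards
    x = equilibrate(a, x)
    up.append(x)

down = []
for a in forcings[::-1]:        # sweep back down from the upper branch
    x = equilibrate(a, x)
    down.append(x)
down = down[::-1]               # re-order to match `forcings`

i = np.argmin(np.abs(forcings))  # index of a = 0
print("state at a=0, upward sweep  :", up[i])    # ~ -1 (lower branch)
print("state at a=0, downward sweep:", down[i])  # ~ +1 (upper branch)

The jumps occur at the two fold points (a around +/-0.38), not at the same forcing value, so reversing the forcing does not immediately reverse the state.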
Early warning signals
For tipping points that occur because of a bifurcation, it may be possible to detect whether a system is getting closer to a tipping point, as it becomes less resilient to perturbations as it approaches the tipping threshold. These systems display critical slowing down, with an increased memory (rising autocorrelation) and variance. Depending on the nature of the tipping system, there may be other types of early warning signals. Abrupt change is not in itself an early warning signal (EWS) for tipping points, as abrupt change can also occur in systems whose response is reversible in the control parameter.
These EWSs are often developed and tested using time series from the paleo record, like sediments, ice caps, and tree rings, where past examples of tipping can be observed. It is not always possible to say whether increased variance and autocorrelation are a precursor to tipping or are caused by internal variability, for instance in the case of the collapse of the AMOC. Quality limitations of paleodata further complicate the development of EWSs. They have been developed for detecting tipping due to drought in forests in California, and melting of the Pine Island Glacier in West Antarctica, among other systems. Using early warning signals (increased autocorrelation and variance of the melt rate time series), it has been suggested that the Greenland ice sheet is currently losing resilience, consistent with modelled early warning signals of the ice sheet.
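These indicators are straightforward to compute from a time series. The sketch below is a synthetic Python demonstration, not an analysis of real paleoclimate data: it drives a simple saddle-node ("fold") model slowly towards its bifurcation while adding noise, then tracks rolling variance and lag-1 autocorrelation, both of which rise on the approach to the tipping point.

import numpy as np

rng = np.random.default_rng(0)
n, dt = 20000, 0.01
a = np.linspace(-1.0, -0.05, n)   # forcing ramped slowly towards the fold at a = 0

# Simulate dx/dt = a(t) + x^2 plus noise; the stable state x = -sqrt(-a)
# loses stability as a approaches 0 (critical slowing down).
x = np.empty(n)
x[0] = -1.0
for t in range(n - 1):
    x[t + 1] = x[t] + dt * (a[t] + x[t]**2) \
               + 0.05 * np.sqrt(dt) * rng.standard_normal()

def rolling_ews(series, window=1000, step=50):
    idx = np.arange(window)
    var, ac1 = [], []
    for i in range(window, len(series), step):
        w = series[i - window:i]
        w = w - np.polyval(np.polyfit(idx, w, 1), idx)   # linear detrend
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])     # lag-1 autocorrelation
    return np.array(var), np.array(ac1)

var, ac1 = rolling_ews(x)
print("variance, early vs late :", var[0], var[-1])   # increases
print("lag-1 AC, early vs late :", ac1[0], ac1[-1])   # increases

On real records the same calculation is complicated by dating uncertainty, uneven sampling and the data-quality limitations noted above.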
Human-induced changes in the climate system may be too fast for early warning signals to become evident, especially in systems with inertia.
Noise-induced tipping
Noise-induced tipping is the transition from one state to another due to random fluctuations or internal variability of the system. Noise-induced transitions do not show any of the early warning signals which occur with bifurcations. This means they are unpredictable because the underlying potential does not change. Because they are unpredictable, such occurrences are often described as a "one-in-x-year" event. An example is the Dansgaard–Oeschger events during the last ice age, with 25 occurrences of sudden climate fluctuations over a 500-year period.
Rate-induced tipping
Rate-induced tipping occurs when a change in the environment is faster than the force that restores the system to its stable state. In peatlands, for instance, after years of relative stability, rate-induced tipping can lead to an "explosive release of soil carbon from peatlands into the atmosphere" – sometimes known as "compost bomb instability". The AMOC may also show rate-induced tipping: if the rate of ice melt increases too fast, it may collapse, even before the ice melt reaches the critical value where the system would undergo a bifurcation.
Potential impacts
Tipping points can have very severe impacts. They can exacerbate the current dangerous impacts of climate change, or give rise to new impacts. Some potential tipping points would take place abruptly, such as disruptions to the Indian monsoon, with severe impacts on food security for hundreds of millions. Other impacts would likely take place over longer timescales, such as the melting of the ice caps. The circa of sea level rise from the combined melt of Greenland and West Antarctica would require moving many cities inland over the course of centuries, but would also accelerate sea level rise this century, with Antarctic ice sheet instability projected to expose 120 million more people to annual floods in a mid-emissions scenario. A collapse of the Atlantic overturning circulation would cause over 10 degrees Celsius of cooling in parts of Europe, cause drying in Europe, Central America, West Africa and southern Asia, and lead to about of sea level rise in the North Atlantic. The impacts of AMOC collapse would have serious implications for food security, with one projection showing reduced yields of key crops across most world regions; arable agriculture, for example, could become economically infeasible in Britain. These impacts could happen simultaneously in the case of cascading tipping points. A review of abrupt changes over the last 30,000 years showed that tipping points can lead to a large set of cascading impacts in climate, ecological and social systems. For instance, the abrupt termination of the African humid period cascaded: desertification and regime shifts led to the retreat of pastoral societies in North Africa and a change of dynasty in Egypt.
Some scholars have proposed a threshold which, if crossed, could trigger multiple tipping points and self-reinforcing feedback loops that would prevent stabilisation of the climate, causing much greater warming and sea-level rise and leading to severe disruption to ecosystems, society, and economies. This scenario is sometimes called the Hothouse Earth scenario. The researchers proposed that this scenario could unfold beyond a threshold of around 2 °C above pre-industrial levels. However, while this scenario is possible, the existence and value of this threshold remain speculative, and doubts have been raised about whether tipping points would lock in much extra warming in the shorter term. Decisions taken over the next decade could influence the climate of the planet for tens to hundreds of thousands of years and potentially even lead to conditions which are inhospitable to current human societies. The same researchers also state that there is a possibility of a cascade of tipping points being triggered even if the warming goal outlined in the Paris Agreement is achieved.
Geological timescales
The geological record shows many abrupt changes that suggest tipping points may have been crossed in pre-historic times. For instance, the Dansgaard–Oeschger events during the last ice age were periods of abrupt warming (within decades) in Greenland and Europe that may have involved abrupt changes in major ocean currents. During the deglaciation in the early Holocene, sea level rise was not smooth, but rose abruptly during meltwater pulses. The monsoon in North Africa saw abrupt changes on decadal timescales during the African humid period. This period, spanning from 15,000 to 5,000 years ago, also ended suddenly in a drier state.
Runaway greenhouse effect
A runaway greenhouse effect is a tipping point so extreme that the oceans evaporate and the water vapour escapes to space, an irreversible climate state that happened on Venus. A runaway greenhouse effect has virtually no chance of being caused by people. Venus-like conditions on the Earth require a large long-term forcing that is unlikely to occur until the Sun brightens by tens of percent, which will take 600–700 million years.
| Physical sciences | Climate change | Earth science |
7721179 | https://en.wikipedia.org/wiki/Laika%20%28dog%20type%29 | Laika (dog type) | Laikas are aboriginal spitz breeds from Northern Russia, especially Siberia, though the term is sometimes expanded to include the Nordic hunting breeds. Laika breeds are primitive dogs that flourish with minimal care even in hostile weather. Generally, laika breeds are expected to be versatile hunting dogs, capable of hunting game of a variety of sizes by treeing small game, pointing and baying larger game, and working in teams to corner bear and boar. However, a few laikas have specialized as herding or sled dogs.
Definition
The Russian word laika (лайка) is a noun derived from the verb layat' (лаять, to bark), and literally means barker. As the name of a dog variety, it is used not only in Russian cynological literature, but sometimes in other languages as well, to refer to all varieties of hunting dogs traditionally kept by the peoples of northern Russia and adjacent areas. This includes not only the three or four breeds known as Laikas in English, but also other standard breeds that the FCI classifies together with them as "Nordic Hunting Dogs" (Group 5, Section 2 of the FCI classification).
Indeed, the word is often used to refer not only to hunting dogs but also to the related sled dog breeds of the tundra belt, which the FCI classifies as "Nordic Sled Dogs", and occasionally even to all spitz breeds.
History
The debate as to what dogs should be considered laikas is as old as Russian cynology. Two of the first known published works on laikas were Prince A. A. Shirinsky-Shikhmatov's groundbreaking illustrated book, "Album of Northern Dogs (Laikas)", and M. G. Dmitrieva-Sulima's book, "The Laika and Hunting With It". An avid bear hunter, Prince Shirinsky-Shikhmatov is described thus: "Being much interested in the natural sagacity and hunting capacity of the laïkas he procured some hundreds of specimens of different varieties and applied himself seriously to their study and breeding." Prince Shirinsky-Shikhmatov cataloged 13 breeds of laikas: Zyryan, Finno-Karelian, Vogul, Cheremis, Ostyak, Tungus, Votyak, Galician, Norwegian, Buryatian, Soyotian, Laplandian and Samoyed Laika. However, sportswoman and author M. G. Dmitrieva-Sulima considered the term "Northern Dog" to be the most appropriate name for this numerous group of dogs. She also admitted that even the term "northern" would not be quite precise, because dogs of similar type also occurred in Africa, America and everywhere in Asia. Dmitrieva-Sulima went on to name 19 additional laika breeds, raising the grand total to 30: the Kevrolian, Olonets, Kyrghyz, Yakut, Koryak, Orochon, Gilyak, Bashkir, Mongolian, Chukotka, Golds and Yukagir Laikas, the Tomsk, Vilyui, Berezovo-Surgut, Kolyma and Pechora Laikas, and the Polar Dog.
Regardless of the exact count of laika breeds, all contemporary writers speak of the reverence in which local ethnic groups held these dogs. Russian ethnographer Vladimir Jochelson writes: "The sled dog is at the same time a hunting dog, with a well-developed sense of smell, but with better hearing and sight. Almost all year round on a leash, but left to themselves, they are perfectly able to find food in the form of mice, partridges, ducks and other birds and small animals."
During the Soviet era, there was a push to classify dogs by their specialization as well as to merge similar local dogs into large geographic zones. Thus, many experts began to consider laikas to be strictly hunting dogs and to exclude herding and sled laikas altogether. However, this proved problematic, as the primitive nature of laikas resulted in less specialization than seen in other breeds, and the sheer scale of these regions made it difficult to produce a uniform dog within the zones. Nevertheless, in 1949, standards for four breeds of hunting Laikas were approved: the Karelo-Finnish Laika, Russo-European Laika, West Siberian Laika and East Siberian Laika. In 1952, the Cynological Soviet of Glavokhota of the Russian Federation approved permanent breed standards for the first three breeds. Meanwhile, sled dogs were divided into two types, the smaller western Samoyed and the larger Northeastern Hauling Laika, of which the first was permanently recognized. The popularity of pedigreed dogs, combined with a systematic campaign by officials to eliminate aboriginal dogs, resulted in a collapse of unrecognized local aboriginal laikas. In addition, the introduction of mechanized travel, as well as the decline in fur hunting and local fishing, further hastened the decline of laikas.
The collapse of the Soviet Union made room for additional laikas to be recognized as purebred, and in 1992, the Kamchatka Laika and the Chukotka sled dog gained recognition as purebred by the Russian Cynological Federation, followed by the Nenets Herding Laika in 1994. In 2004, the Yakutian Laika was recognized. Many of the laikas identified at the beginning of the 20th century are now thought to be lost, including the Gilyak Laika (Sakhalin husky) and the Yukaghir Laika.
Despite the visual similarity amongst the laika breeds, genetic analysis shows little connection between them and similar breeds in the adjacent geographical areas of Russia and Scandinavia.
Breeds commonly recognized as Laikas
| Biology and health sciences | Dogs | Animals |
7726759 | https://en.wikipedia.org/wiki/Intersection%20graph | Intersection graph | In graph theory, an intersection graph is a graph that represents the pattern of intersections of a family of sets. Any graph can be represented as an intersection graph, but some important special classes of graphs can be defined by the types of sets that are used to form an intersection representation of them.
Formal definition
Formally, an intersection graph G is an undirected graph formed from a family of sets Sᵢ, i = 0, 1, 2, ..., by creating one vertex vᵢ for each set Sᵢ, and connecting two vertices vᵢ and vⱼ by an edge whenever the corresponding two sets have a nonempty intersection, that is, E(G) = {{vᵢ, vⱼ} | i ≠ j, Sᵢ ∩ Sⱼ ≠ ∅}.
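As a concrete illustration of this definition, the following short Python sketch builds the vertex and edge sets of an intersection graph directly from a family of sets; the function name and the toy family are illustrative choices, not a standard library API:

from itertools import combinations

def intersection_graph(sets):
    """Build the intersection graph of a family of sets.

    Vertices are the keys of `sets`; an edge joins two keys whenever the
    corresponding sets have a nonempty intersection.
    """
    edges = {frozenset((u, v))
             for u, v in combinations(sets, 2)
             if sets[u] & sets[v]}
    return set(sets), edges

# Toy family: A and B overlap in the element 2; C is disjoint from both.
family = {"A": {1, 2}, "B": {2, 3}, "C": {4}}
vertices, edges = intersection_graph(family)
print(edges)  # {frozenset({'A', 'B'})}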
All graphs are intersection graphs
Any undirected graph G may be represented as an intersection graph. For each vertex v of G, form a set Sᵥ consisting of the edges incident to v; then two such sets have a nonempty intersection if and only if the corresponding vertices share an edge. Therefore, G is the intersection graph of the sets Sᵥ.
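A minimal sketch of this universal construction, reusing the intersection_graph helper from the sketch above; the function name and the toy triangle-plus-isolated-vertex graph are illustrative:

def edge_incidence_sets(vertices, edges):
    """For each vertex v, the set S_v of edges incident to v.

    An edge lies in both S_u and S_v exactly when it is the edge uv, so
    the intersection graph of these sets reproduces the original graph.
    """
    return {v: {e for e in edges if v in e} for v in vertices}

V = {"a", "b", "c", "d"}                      # "d" is an isolated vertex
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "a")]}
S = edge_incidence_sets(V, E)
_, reconstructed = intersection_graph(S)
print(reconstructed == E)  # True: G equals the intersection graph of the S_v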
A construction that is more efficient, in the sense that it requires a smaller total number of elements in all of the sets combined, is also known: the total number of set elements needed is at most n²/4, where n is the number of vertices in the graph. The observation that all graphs are intersection graphs had already been made in earlier work. The intersection number of a graph is the minimum total number of elements in any intersection representation of the graph.
Classes of intersection graphs
Many important graph families can be described as intersection graphs of more restricted types of set families, for instance sets derived from some kind of geometric configuration:
An interval graph is defined as the intersection graph of intervals on the real line, or of connected subgraphs of a path graph.
An indifference graph may be defined as the intersection graph of unit intervals on the real line.
A circular arc graph is defined as the intersection graph of arcs on a circle.
A polygon-circle graph is defined as the intersection graph of polygons with corners on a circle.
One characterization of a chordal graph is as the intersection graph of connected subgraphs of a tree.
A trapezoid graph is defined as the intersection graph of trapezoids formed from two parallel lines. They are a generalization of the notion of permutation graph, and in turn they are a special case of the family of the complements of comparability graphs, known as cocomparability graphs.
A unit disk graph is defined as the intersection graph of unit disks in the plane.
A circle graph is the intersection graph of a set of chords of a circle.
The circle packing theorem states that planar graphs are exactly the intersection graphs of families of closed disks in the plane bounded by non-crossing circles.
Scheinerman's conjecture (now a theorem) states that every planar graph can also be represented as an intersection graph of line segments in the plane. However, intersection graphs of line segments may be nonplanar as well, and recognizing intersection graphs of line segments is complete for the existential theory of the reals.
The line graph of a graph G is defined as the intersection graph of the edges of G, where we represent each edge as the set of its two endpoints.
A string graph is the intersection graph of curves on a plane.
A graph has boxicity k if it is the intersection graph of multidimensional boxes of dimension k, but not of any smaller dimension.
A clique graph is the intersection graph of the maximal cliques of another graph.
A block graph or clique tree is the intersection graph of the biconnected components of another graph.
The intersection classes of graphs, that is, families of finite graphs that can be described as the intersection graphs of sets drawn from a given family of sets, have been characterized. It is necessary and sufficient that the family have the following properties:
Every induced subgraph of a graph in the family must also be in the family.
Every graph formed from a graph in the family by replacing a vertex by a clique must also belong to the family.
There exists an infinite sequence of graphs in the family, each of which is an induced subgraph of the next graph in the sequence, with the property that every graph in the family is an induced subgraph of a graph in the sequence.
If the intersection graph representations have the additional requirement that different vertices must be represented by different sets, then the clique expansion property can be omitted.
Related concepts
An order-theoretic analog of intersection graphs is the inclusion order. In the same way that an intersection representation of a graph labels every vertex with a set so that vertices are adjacent if and only if their sets have nonempty intersection, an inclusion representation f of a poset labels every element with a set so that for any x and y in the poset, x ≤ y if and only if f(x) ⊆ f(y).
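One canonical inclusion representation labels each element x with its down-set {y : y ≤ x}; for any partial order, x ≤ y then holds exactly when f(x) ⊆ f(y). A minimal Python sketch of this construction follows; the function name and the divisibility example are illustrative:

def downset_representation(elements, leq):
    """Label each poset element x with its down-set {y : y <= x}.

    For any partial order, x <= y holds iff downset(x) is a subset of
    downset(y), giving a canonical inclusion representation.
    """
    return {x: frozenset(y for y in elements if leq(y, x)) for x in elements}

# Divisibility order on {1, 2, 3, 6}: a <= b when a divides b.
elems = [1, 2, 3, 6]
f = downset_representation(elems, lambda a, b: b % a == 0)
print(f[2] <= f[6])  # True:  2 divides 6, and f(2) is a subset of f(6)
print(f[2] <= f[3])  # False: 2 does not divide 3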
| Mathematics | Graph theory | null |
7728392 | https://en.wikipedia.org/wiki/Entropy%20%28order%20and%20disorder%29 | Entropy (order and disorder) | In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following differential expression:
dS = δQ/T
where δQ = motional energy ("heat") that is transferred reversibly to the system from the surroundings and T = the absolute temperature at which the transfer occurs.
In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding.
In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'.
History
This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics.
In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long and distinguished life developing the subject further. Later, Boltzmann, in efforts to develop a kinetic theory for the behavior of a gas, applied the laws of probability to Maxwell's and Clausius' molecular interpretation of entropy so as to begin to interpret entropy in terms of order and disorder. Similarly, in 1882 Hermann von Helmholtz used the word "Unordnung" (disorder) to describe entropy.
Overview
To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy:
A measure of the unavailability of a system's energy to do work; also a measure of disorder; the higher the entropy the greater the disorder.
A measure of disorder; the higher the entropy the greater the disorder.
In thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder the higher the entropy.
A measure of disorder in the universe or of the unavailability of the energy in a system to do work.
Entropy and disorder also have associations with equilibrium. Technically, entropy, from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles. In a stretched out piece of rubber, for example, the arrangement of the molecules of its structure has an "ordered" distribution and has zero entropy, while the "disordered" kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value.
In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, "that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives." In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism.
The mathematical basis for the association of entropy with order and disorder began, essentially, with the famous Boltzmann formula, S = k ln W, which relates entropy S to the number of possible states W in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all of the particles, will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, "it is obvious that entropy is a measure of order or, most likely, disorder in the system." In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that the entropy of the universe tends to a maximum.
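A minimal numerical sketch of the two-section box example, assuming N distinguishable particles that may each occupy either half of the box, so that W = 2^N equally likely microstates; the function name is illustrative:

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(states):
    """S = k ln W for a system with W equally likely microstates."""
    return k_B * math.log(states)

# Entropy grows with the number of available states: W = 2**N.
for n in (1, 10, 100):
    print(n, boltzmann_entropy(2 ** n))

# Probability that all N particles sit in one chosen half is 1 / 2**N,
# already vanishingly small for N = 100.
print(1 / 2 ** 100)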
Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the "ordering" process and operation of evolution in relation to Clausius' most famous version of the second law, which states that the universe is headed towards maximal "disorder". In the 2003 book SYNC – the Emerging Science of Spontaneous Order by Steven Strogatz, for example, we find "Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves."
The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. The condition for this statement is that living systems are open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of "isolated" situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass.
Phase change
Owing to these early developments, the typical example of entropy change ΔS is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids, and liquids have smaller entropy than gases, and colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature, crystalline structures are approximated to have perfect "order" and zero entropy. This correlation occurs because the number of different microscopic quantum energy states available to an ordered system is usually much smaller than the number of states available to a system that appears to be disordered.
In his famous 1896 Lectures on Gas Theory, Boltzmann diagrammed the structure of a solid body by postulating that each molecule in the body has a "rest position". According to Boltzmann, if it approaches a neighbor molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules (see: history of the molecule). According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature.
Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest position of molecules will be pushed apart, the body will expand, and this will create more molar-disordered distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy.
Entropy-driven order
Entropy has been historically, e.g. by Clausius and Helmholtz, associated with disorder. However, in common speech, order is used to describe organization, structural regularity, or form, like that found in a crystal compared with a gas. This commonplace notion of order is described quantitatively by Landau theory. In Landau theory, the development of order in the everyday sense coincides with the change in the value of a mathematical quantity, a so-called order parameter. An example of an order parameter for crystallization is "bond orientational order" describing the development of preferred directions (the crystallographic axes) in space. For many systems, phases with more structural (e.g. crystalline) order exhibit less entropy than fluid phases under the same thermodynamic conditions. In these cases, labeling phases as ordered or disordered according to the relative amount of entropy (per the Clausius/Helmholtz notion of order/disorder) or via the existence of structural regularity (per the Landau notion of order/disorder) produces matching labels.
However, there is a broad class of systems that manifest entropy-driven order, in which phases with organization or structural regularity, e.g. crystals, have higher entropy than structurally disordered (e.g. fluid) phases under the same thermodynamic conditions. In these systems phases that would be labeled as disordered by virtue of their higher entropy (in the sense of Clausius or Helmholtz) are ordered in both the everyday sense and in Landau theory.
Under suitable thermodynamic conditions, entropy has been predicted or discovered to induce systems to form ordered liquid crystals, crystals, and quasicrystals. In many systems, directional entropic forces drive this behavior. More recently, it has been shown that it is possible to precisely engineer particles for target ordered structures.
Adiabatic demagnetization
In the quest for ultra-cold temperatures, a temperature-lowering technique called adiabatic demagnetization is used, in which atomic entropy considerations that can be described in order-disorder terms are exploited. In this process, a sample of solid such as chrome-alum salt, whose molecules are equivalent to tiny magnets, is placed inside an insulated enclosure and cooled to a low temperature, typically 2 or 4 kelvins, with a strong magnetic field applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned, forming a well-ordered "initial" state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets then assume random, less-ordered orientations, owing to thermal agitations, in the "final" state.
The "disorder" and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. ΔS = 0. The increase in disorder, however, associated with the randomizing directions of the atomic magnets represents an entropy increase? To compensate for this, the disorder (entropy) associated with the temperature of the specimen must decrease by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium.
Difficulties with the term "disorder"
In recent years the long-standing use of the term "disorder" to discuss entropy has met with some criticism. Critics of the terminology state that entropy is not a measure of "disorder" or "chaos", but rather a measure of energy's diffusion or dispersal to more microstates. Shannon's use of the term "entropy" in information theory refers to the most compressed, or least dispersed, amount of code needed to encompass the content of a signal.
| Physical sciences | Statistical mechanics | Physics |
12385122 | https://en.wikipedia.org/wiki/Semi-rigid%20airship | Semi-rigid airship | A semi-rigid airship is an airship which has a stiff keel or truss supporting the main envelope along its length. The keel may be partially flexible or articulated and may be located inside or outside the main envelope. The outer shape of the airship is maintained by gas pressure, as with the non-rigid "blimp". Semi-rigid dirigibles were built in significant quantity from the late 19th century but in the late 1930s they fell out of favour along with rigid airships. No more were constructed until the semi-rigid design was revived by the Zeppelin NT in 1997.
Semi-rigid construction is lighter-weight than the outer framework of a rigid airship, while it allows greater loading than a non-rigid type.
Principle
More or less integrally attached to the hull are the gondola, engines and sometimes the empennage (tail). The framework has the task of distributing the suspension loads of these attachments and the lifting gas loads evenly throughout the whole hull's surface, and may also partially relieve stresses on the hull during manoeuvres. Compared with early airships, which relied on nets, fabric bands, or complicated systems of rope rigging to unite the lifting envelope with the other parts of the ship, semi-rigid construction achieved improvements in weight, aerodynamics, and structural performance. The boundary between semi-rigid and non-rigid airships is vague. Especially with small types, it is unclear whether the structure is merely an extended gondola or a proper structural keel.
As in non-rigid airships, the hull's aerodynamic shape is maintained by an overpressure of the gas inside and light framework at the nose and tail. Changes in volume of the lifting gas are balanced using ballonets (air-filled bags). Ballonets also may serve to provide pitch control. For small types the lifting gas is sometimes held in the hull itself, while larger types tend to use separate gas cells, mitigating the consequences of a single gas cell failure and helping to reduce the amount of overpressure needed.
History
In the first decade of the twentieth century, semi-rigid airships were considered more suitable for military use because, unlike rigid airships, they could be deflated, stored and transported by land or by sea. Non-rigid airships had a limited lifting capacity due to the strength limitations of the envelope and rigging materials then in use.
An early successful example is the Groß-Basenach design made by Major Hans Groß from the Luftschiffer-Bataillon Nr. 1 in Berlin, the experimental first ship flying in 1907. It had a rigid keel under the envelope. Four more military airships of this design were built, and often rebuilt, designated M I to M IV, up to 1914.
The most advanced construction of semi-rigid airships between the two World Wars took place in Italy. There, the state factory Stabilimento di Costruzioni Aeronautiche (SCA) constructed several. Umberto Nobile, later General and director, was its best-known member, and he designed and flew several semi-rigid airships, including the Norge and Italia, for his overflights of the North Pole, and the W6 OSOAVIAKhIM for the Soviet Union's airship program.
List of other semi-rigid airships
Pre-War and WWI
Bartolomeu de Gusmão from Augusto Severo de Albuquerque Maranhão in Brazil in 1894, destroyed in March 1894 by a gust of wind
Pax from Augusto Severo de Albuquerque Maranhão in France in 1902, caught fire at its first ascent, killing the pilot
Le Jaune - Built by Lebaudy Frères in France, first flight: 1902-11-13. Lebaudy built many other semi-rigid airships, among them the Patrie and the République.
British Army Dirigible No 1, often called the Nulli Secundus.
Forlanini F.1 Leonardo da Vinci, Italy, 3265 m3, 40 PS, first ascent: 1909; 1910-02-01 damaged beyond repair
The Groß-Basenach-type airship (5 built for the Prussian army)
The Luftschiff von Veeh (also Veeh 1 or Stahlluftschiff) built by Albert Paul Veeh from Apolda in Düsseldorf in the 1910s
Siemens-Schuckert I (1911),
M.1, Italian, first flight 1912, 83-metre long, 17-metre diameter, 2× 250 PS Fiat SA.76-4 engines each with one airscrew, payload: 3800 kg, first with the Army then the Navy, 164 flights, decommissioned 1924
M.2, Città di Ferrara, Italian, first flight 1913, hull identical to the M.1, 83-metre long, 17-metre diameter, 4×125 PS driving two airscrews, payload 3000 kg, speed: 85 km/h, a Navy airship, stationed in Jesi, on 1915-06-08 shot down by an Austrian flying boat
Forlanini F.2 Città di Milano, Italy, 11,500 m3, 2×85 PS, first flight: 1913-04-09, destroyed 1914-04-09 at Como
SR.1 (M-class) built by Italy for England 1918, 12,500 m3, 83 m long, 17 m diameter, 9-man crew, internal keel of triangular steel components
1920s and 1930s
Among the Parseval airships designed by August von Parseval in the 1900s-1930s:
PL 26 and PL 27
Parseval-Naatz designs
Zodiac V10 was built 1930 for the French Navy
O-1, built by SCDA, Italy, and the only true semi-rigid airship to serve with the United States Navy.
RS-1 was the only American-built semi-rigid military airship (flown by the United States Army) Manufacturer: Goodyear, maiden flight: 1926.
Raab-Katzenstein 27 - maiden flight: 1929-05-04
Nobile's company designed or built the following airships:
T 34 Roma, 33,810 m3, sold to US Army in 1921 and destroyed in 1922 after rudder malfunction caused collision with high tension wires
N 1 Norge, 19,000 m3, reached the North Pole in 1926
N 2, a 7,000 m3 airship built in hangars at Augusta, Sicily
N 3 Sold to Japan as naval Airship No. 6, first flight on 1927-04-06. It was lost in 1927 after encountering a typhoon in the Pacific. There were no fatalities
N 4 Italia Flew to Svalbard for Arctic expedition 1928, crashed after third polar flight on return from North Pole
N 5 was a project for a 55,000 cubic metre keel airship, many times interrupted, eventually abandoned 1928
Nobile-designed airships of the Russian airship program, such as the Soviet SSSR-V6 OSOAVIAKhIM (1934–1938)
The Fujikura company built the No. 8 semi-rigid airship for the Japanese Navy to replace the Nobile N 3, basing the design on the latter airship. The airship set a record for an endurance flight of 60 hours and 1 minute on 17 July 1931, a record later broken by the Soviet OSOAVIAKhIM.
Current developments
Currently, the only manned semi-rigid model of airship in active operation is the Zeppelin NT. It comprises a single gas cell kept at a slight over-pressure, ballonets to maintain constant volume, and a triangular keel structure internal to the cell. Three of these will be American-based airships.
CL160 "Cargolifter" was an unrealised design of the now liquidated German Cargolifter AG (1996–2003). Cargolifter Joey was a small semi-rigid experimental airship produced to test the design.
| Technology | Types of aircraft | null |
1717346 | https://en.wikipedia.org/wiki/Retrosynthetic%20analysis | Retrosynthetic analysis | Retrosynthetic analysis is a technique for solving problems in the planning of organic syntheses. This is achieved by transforming a target molecule into simpler precursor structures regardless of any potential reactivity/interaction with reagents. Each precursor material is examined using the same method. This procedure is repeated until simple or commercially available structures are reached. These simpler/commercially available compounds can be used to form a synthesis of the target molecule. Retrosynthetic analysis was used as early as 1917 in Robinson's Tropinone total synthesis. Important conceptual work on retrosynthetic analysis was published by George Vladutz in 1963.
E.J. Corey formalized and popularized the concept from 1967 onwards in his article General methods for the construction of complex molecules and his book The Logic of Chemical Synthesis.
The power of retrosynthetic analysis becomes evident in the design of a synthesis. The goal of retrosynthetic analysis is structural simplification. Often, a synthesis will have more than one possible synthetic route. Retrosynthesis is well suited for discovering different synthetic routes and comparing them in a logical and straightforward fashion. A database may be consulted at each stage of the analysis, to determine whether a component already exists in the literature. In that case, no further exploration of that compound would be required. If that compound exists, it can be a jumping-off point for further steps developed to reach a synthesis.
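Viewed abstractly, this recursive procedure is a search over a retrosynthetic tree. The following Python sketch illustrates the idea under stated assumptions: the transform table, the catalog of available materials, and all function names are hypothetical toy constructs, here mirroring the phenylacetic acid example worked out later in this article, and not a real cheminformatics API:

def retrosynthesize(target, transforms, available, depth=0, max_depth=5):
    """Naive recursive retrosynthetic search.

    `transforms` maps a compound to a list of precursor lists (each inner
    list is one possible disconnection); `available` is the catalog of
    purchasable starting materials. Returns one route as a nested dict,
    or None if no route is found within `max_depth` steps.
    """
    if target in available:
        return target                      # commercially available: stop
    if depth >= max_depth:
        return None
    for precursors in transforms.get(target, []):
        routes = [retrosynthesize(p, transforms, available, depth + 1, max_depth)
                  for p in precursors]
        if all(r is not None for r in routes):
            return {target: routes}        # every branch bottoms out
    return None

# Toy transform table for phenylacetic acid:
transforms = {
    "PhCH2COOH": [["PhCH2CN"]],            # nitrile hydrolysis, reversed
    "PhCH2CN":   [["PhCH2Br", "NaCN"]],    # cyanide substitution, reversed
}
available = {"PhCH2Br", "NaCN"}
print(retrosynthesize("PhCH2COOH", transforms, available))
# {'PhCH2COOH': [{'PhCH2CN': ['PhCH2Br', 'NaCN']}]}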
Definitions
Disconnection A retrosynthetic step involving the breaking of a bond to form two (or more) synthons.
Retron A minimal molecular substructure that enables certain transformations.
Retrosynthetic tree A directed acyclic graph of several (or all) possible retrosyntheses of a single target.
Synthon A fragment of a compound that assists in the formation of a synthesis, derived from the target molecule. A synthon does not itself exist as a stable species; in practice it is replaced by a corresponding commercially available synthetic equivalent.
Target The desired final compound.
Transform The reverse of a synthetic reaction; the formation of starting materials from a single product.
Example
A retrosynthetic analysis of phenylacetic acid proceeds as follows:
In planning the synthesis, two synthons are identified: a nucleophilic "−COOH" group and an electrophilic "PhCH2+" group. Neither synthon exists as written; synthetic equivalents corresponding to the synthons are reacted to produce the desired product. In this case, the cyanide anion is the synthetic equivalent for the −COOH synthon, while benzyl bromide is the synthetic equivalent for the benzyl synthon.
The synthesis of phenylacetic acid determined by retrosynthetic analysis is thus:
PhCH2Br + NaCN → PhCH2CN + NaBr
PhCH2CN + 2 H2O → PhCH2COOH + NH3
In fact, phenylacetic acid has been synthesized from benzyl cyanide, itself prepared by the analogous reaction of benzyl bromide with sodium cyanide.
Strategies
Functional group strategies
Manipulation of functional groups can lead to significant reductions in molecular complexity.
Stereochemical strategies
Numerous chemical targets have distinct stereochemical demands. Stereochemical transformations (such as the Claisen rearrangement and the Mitsunobu reaction) can remove or transfer the desired chirality, thus simplifying the target.
Structure-goal strategies
Directing a synthesis toward a desirable intermediate can greatly narrow the focus of analysis. This allows bidirectional search techniques.
Transform-based strategies
The application of transformations to retrosynthetic analysis can lead to powerful reductions in molecular complexity. Unfortunately, powerful transform-based retrons are rarely present in complex molecules, and additional synthetic steps are often needed to establish their presence.
Topological strategies
The identification of one or more key bond disconnections may reveal key substructures, or rearrangement transformations that are otherwise difficult to spot.
Disconnections that preserve ring structures are encouraged.
Disconnections that create rings larger than 7 members are discouraged.
Disconnection involves creativity.
| Physical sciences | Synthetic strategies | Chemistry |
1717684 | https://en.wikipedia.org/wiki/Ribosomal%20RNA | Ribosomal RNA | Ribosomal ribonucleic acid (rRNA) is a type of non-coding RNA which is the primary component of ribosomes, essential to all cells. rRNA is a ribozyme which carries out protein synthesis in ribosomes. Ribosomal RNA is transcribed from ribosomal DNA (rDNA) and then bound to ribosomal proteins to form small and large ribosome subunits. rRNA is the physical and mechanical factor of the ribosome that forces transfer RNA (tRNA) and messenger RNA (mRNA) to process and translate the latter into proteins. Ribosomal RNA is the predominant form of RNA found in most cells; it makes up about 80% of cellular RNA despite never being translated into proteins itself. Ribosomes are composed of approximately 60% rRNA and 40% ribosomal proteins, though this ratio differs between prokaryotes and eukaryotes.
Structure
Although the primary structure of rRNA sequences can vary across organisms, base-pairing within these sequences commonly forms stem-loop configurations. The length and position of these rRNA stem-loops allow them to create three-dimensional rRNA structures that are similar across species. Because of these configurations, rRNA can form tight and specific interactions with ribosomal proteins to form ribosomal subunits. These ribosomal proteins contain basic residues (as opposed to acidic residues) and aromatic residues (i.e. phenylalanine, tyrosine and tryptophan) allowing them to form chemical interactions with their associated RNA regions, such as stacking interactions. Ribosomal proteins can also cross-link to the sugar-phosphate backbone of rRNA with binding sites that consist of basic residues (i.e. lysine and arginine). All ribosomal proteins (including the specific sequences that bind to rRNA) have been identified. These interactions along with the association of the small and large ribosomal subunits result in a functioning ribosome capable of synthesizing proteins.
Ribosomal RNA organizes into two types of major ribosomal subunit: the large subunit (LSU) and the small subunit (SSU). One of each type comes together to form a functioning ribosome. The subunits are at times referred to by their size-sedimentation measurements (a number with an "S" suffix). In prokaryotes, the LSU and SSU are called the 50S and 30S subunits, respectively. In eukaryotes, they are a little larger; the LSU and SSU of eukaryotes are termed the 60S and 40S subunits, respectively.
In the ribosomes of prokaryotes such as bacteria, the SSU contains a single small rRNA molecule (~1500 nucleotides) while the LSU contains a single small rRNA and a single large rRNA molecule (~3000 nucleotides). These are combined with ~50 ribosomal proteins to form ribosomal subunits. There are three types of rRNA found in prokaryotic ribosomes: 23S and 5S rRNA in the LSU and 16S rRNA in the SSU.
In the ribosomes of eukaryotes such as humans, the SSU contains a single small rRNA (~1800 nucleotides) while the LSU contains two small rRNAs and one molecule of large rRNA (~5000 nucleotides). Eukaryotic rRNA has over 70 ribosomal proteins which interact to form larger and more polymorphic ribosomal units in comparison to prokaryotes. There are four types of rRNA in eukaryotes: 3 species in the LSU and 1 in the SSU. Yeast has been the traditional model for observation of eukaryotic rRNA behavior and processes, leading to a deficit in diversification of research. It has only been within the last decade that technical advances (specifically in the field of Cryo-EM) have allowed for preliminary investigation into ribosomal behavior in other eukaryotes. In yeast, the LSU contains the 5S, 5.8S and 28S rRNAs. The combined 5.8S and 28S are roughly equivalent in size and function to the prokaryotic 23S rRNA subtype, minus expansion segments (ESs) that are localized to the surface of the ribosome which were thought to occur only in eukaryotes. However recently, the Asgard phyla, namely, Lokiarchaeota and Heimdallarchaeota, considered the closest archaeal relatives to Eukarya, were reported to possess two supersized ESs in their 23S rRNAs. Likewise, the 5S rRNA contains a 108‐nucleotide insertion in the ribosomes of the halophilic archaeon Halococcus morrhuae.
A eukaryotic SSU contains the 18S rRNA subunit, which also contains ESs. SSU ESs are generally smaller than LSU ESs.
SSU and LSU rRNA sequences are widely used for the study of evolutionary relationships among organisms, since they are of ancient origin, are found in all known forms of life and are resistant to horizontal gene transfer. rRNA sequences are conserved (unchanged) over time due to their crucial role in the function of the ribosome. Phylogenetic information derived from the 16S rRNA is currently used as the main method of delineation between similar prokaryotic species by calculating nucleotide similarity. The canonical tree of life is the lineage of the translation system.
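A minimal sketch of the nucleotide-similarity calculation underlying such delineation, assuming two sequences that have already been aligned to equal length; the function name, the gap-skipping convention, and the toy fragments are illustrative, not a standard bioinformatics API:

def percent_identity(seq_a, seq_b):
    """Percent identity between two pre-aligned, equal-length sequences.

    Positions where either sequence has a gap ('-') are skipped, one
    common convention when comparing aligned 16S rRNA sequences.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Toy fragments standing in for aligned stretches of 16S rRNA:
print(percent_identity("ACGUACGUAC", "ACGUACG-AC"))  # 100.0 (gap skipped)
print(percent_identity("ACGUACGUAC", "ACGUACGAAC"))  # 90.0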
LSU rRNA subtypes have been called ribozymes because ribosomal proteins cannot bind to the catalytic site of the ribosome in this area (specifically the peptidyl transferase center, or PTC).
The SSU rRNA subtypes decode mRNA in its decoding center (DC). Ribosomal proteins cannot enter the DC.
The structure of rRNA is able to drastically change to affect tRNA binding to the ribosome during translation of other mRNAs. In 16S rRNA, this is thought to occur when certain nucleotides in the rRNA appear to alternate base pairing between one nucleotide or another, forming a "switch" that alters the rRNA's conformation. This process is able to affect the structure of the LSU and SSU, suggesting that this conformational switch in the rRNA structure affects the entire ribosome in its ability to match a codon with its anticodon in tRNA selection as well as decode mRNA.
Assembly
Ribosomal RNA's integration and assembly into ribosomes begins with the folding, modification, processing and assembly of rRNA with ribosomal proteins to form the two ribosomal subunits, the LSU and the SSU. In prokaryotes, rRNA incorporation occurs in the cytoplasm due to the lack of membrane-bound organelles. In eukaryotes, however, this process primarily takes place in the nucleolus and is initiated by the synthesis of pre-rRNA. This requires the presence of all three RNA polymerases. In fact, the transcription of pre-rRNA by RNA polymerase I accounts for about 60% of a cell's total RNA transcription. This is followed by the folding of the pre-rRNA so that it can be assembled with ribosomal proteins. This folding is catalyzed by endo- and exonucleases, RNA helicases, GTPases and ATPases. The rRNA subsequently undergoes endo- and exonucleolytic processing to remove external and internal transcribed spacers. The pre-rRNA then undergoes modifications such as methylation or pseudouridinylation before ribosome assembly factors and ribosomal proteins assemble with the pre-rRNA to form pre-ribosomal particles. After undergoing further maturation steps and subsequent exit from the nucleolus into the cytoplasm, these particles combine to form the ribosomes. The basic and aromatic residues found within the primary structure of rRNA allow for favorable stacking interactions and attraction to ribosomal proteins, creating a cross-linking effect between the backbone of rRNA and other components of the ribosomal unit. More detail on the initiation and beginning portion of these processes can be found in the "Biosynthesis" section.
Function
Universally conserved secondary structural elements in rRNA among different species show that these sequences are some of the oldest discovered. They serve critical roles in forming the catalytic sites of translation of mRNA. During translation of mRNA, rRNA functions to bind both mRNA and tRNA to facilitate the process of translating mRNA's codon sequence into amino acids. rRNA initiates the catalysis of protein synthesis when tRNA is sandwiched between the SSU and LSU. In the SSU, the mRNA interacts with the anticodons of the tRNA. In the LSU, the amino acid acceptor stem of the tRNA interacts with the LSU rRNA. The ribosome catalyzes ester-amide exchange, transferring the C-terminus of a nascent peptide from a tRNA to the amine of an amino acid. These processes are able to occur due to sites within the ribosome in which these molecules can bind, formed by the rRNA stem-loops. A ribosome has three of these binding sites called the A, P and E sites:
In general, the A (aminoacyl) site contains an aminoacyl-tRNA (a tRNA esterified to an amino acid on the 3' end).
The P (peptidyl) site contains a tRNA esterified to the nascent peptide. The free amino (NH2) group of the A site tRNA attacks the ester linkage of the P site tRNA, causing transfer of the nascent peptide to the amino acid in the A site. This reaction takes place in the peptidyl transferase center.
The E (exit) site contains a tRNA that has been discharged, with a free 3' end (with no amino acid or nascent peptide).
A single mRNA can be translated simultaneously by multiple ribosomes. This is called a polysome.
In prokaryotes, much work has been done to further identify the importance of rRNA in translation of mRNA. For example, it has been found that the A site consists primarily of 16S rRNA. Apart from various protein elements that interact with tRNA at this site, it is hypothesized that if these proteins were removed without altering ribosomal structure, the site would continue to function normally. In the P site, through the observation of crystal structures, it has been shown that the 3' end of the 16S rRNA can fold into the site as if it were a molecule of mRNA. This results in intermolecular interactions that stabilize the subunits. Similarly, like the A site, the P site primarily contains rRNA with few proteins. The peptidyl transferase center, for example, is formed by nucleotides from the 23S rRNA subunit. In fact, studies have shown that the peptidyl transferase center contains no proteins, and is entirely initiated by the presence of rRNA. Unlike the A and P sites, the E site contains more proteins. Because proteins are not essential for the functioning of the A and P sites, the molecular composition of the E site suggests that it perhaps evolved later. In primitive ribosomes, it is likely that tRNAs exited from the P site. Additionally, it has been shown that E-site tRNA binds with both the 16S and 23S rRNA subunits.
Subunits and associated ribosomal RNA
Both prokaryotic and eukaryotic ribosomes can be broken down into two subunits, one large and one small. The exemplary species used below for their respective rRNAs are the bacterium Escherichia coli (prokaryote) and human (eukaryote). Note that "nt" represents the length of the rRNA type in nucleotides and the "S" (such as in "16S") represents Svedberg units.
S units of the subunits (or the rRNAs) cannot simply be added because they represent measures of sedimentation rate rather than of mass. The sedimentation rate of each subunit is affected by its shape, as well as by its mass. The nt units can be added as these represent the integer number of units in the linear rRNA polymers (for example, the total length of the human rRNA = 7216 nt).
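Since nucleotide counts, unlike Svedberg units, are additive, the quoted human total can be checked by summing the four cytoplasmic rRNA lengths. A minimal Python sketch follows; the individual lengths are commonly cited values and should be treated as assumptions here, since only the 7216 nt total appears in the text:

# Commonly cited human cytoplasmic rRNA lengths (assumed values):
human_rrna_nt = {"28S": 5070, "18S": 1869, "5.8S": 156, "5S": 121}
total = sum(human_rrna_nt.values())
print(total)  # 7216, matching the total quoted above

# Svedberg units are sedimentation rates and are NOT additive:
# the 40S and 60S subunits form an 80S ribosome, not a "100S" one.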
Gene clusters coding for rRNA are commonly called "ribosomal DNA" or rDNA (note that the term seems to imply that ribosomes contain DNA, which is not the case).
In prokaryotes
In prokaryotes, the small 30S ribosomal subunit contains the 16S ribosomal RNA. The large 50S ribosomal subunit contains two rRNA species (the 5S and 23S ribosomal RNAs). Therefore, in both bacteria and archaea, a single rRNA operon codes for all three rRNA types: 16S, 23S and 5S.
Bacterial 16S ribosomal RNA, 23S ribosomal RNA, and 5S rRNA genes are typically organized as a co-transcribed operon, with an internal transcribed spacer between the 16S and 23S rRNA genes. There may be one or more copies of the operon dispersed in the genome (for example, Escherichia coli has seven). Typically in bacteria there are between one and fifteen copies.
Archaea contain either a single rRNA operon or up to four copies of the same operon.
The 3' end of the 16S ribosomal RNA (in a ribosome) recognizes a sequence on the 5' end of mRNA called the Shine-Dalgarno sequence.
In eukaryotes
In contrast, eukaryotes generally have many copies of the rRNA genes organized in tandem repeats. In humans, approximately 300–400 repeats are present in five clusters, located on chromosomes 13 (RNR1), 14 (RNR2), 15 (RNR3), 21 (RNR4) and 22 (RNR5). Diploid humans have 10 clusters of genomic rDNA which in total make up less than 0.5% of the human genome.
It was previously accepted that repeat rDNA sequences were identical and served as redundancies or failsafes to account for natural replication errors and point mutations. However, sequence variation in rDNA (and subsequently rRNA) in humans across multiple chromosomes has been observed, both within and between human individuals. Many of these variations are palindromic sequences and potential errors due to replication. Certain variants are also expressed in a tissue-specific manner in mice.
Mammalian cells have 2 mitochondrial (12S and 16S) rRNA molecules and 4 types of cytoplasmic rRNA (the 28S, 5.8S, 18S, and 5S subunits). The 28S, 5.8S, and 18S rRNAs are encoded by a single transcription unit (45S) separated by 2 internally transcribed spacers. The first spacer corresponds to the one found in bacteria and archaea, and the other spacer is an insertion into what was the 23S rRNA in prokaryotes. The 45S rDNA is organized into 5 clusters (each has 30–40 repeats) on chromosomes 13, 14, 15, 21, and 22. These are transcribed by RNA polymerase I. The DNA for the 5S subunit occurs in tandem arrays (~200–300 true 5S genes and many dispersed pseudogenes), the largest of them on chromosome 1q41-42. 5S rRNA is transcribed by RNA polymerase III. The 18S rRNA in most eukaryotes is in the small ribosomal subunit, and the large subunit contains three rRNA species (the 5S, 5.8S and 28S rRNAs in mammals, with 25S in place of 28S in plants).
In flies, the large subunit contains four rRNA species instead of three, with a split in the 5.8S rRNA that presents a shorter 5.8S subunit (123 nt) and a 30-nucleotide subunit named the 2S rRNA. Both fragments are separated by an internally transcribed spacer of 28 nucleotides. Since the 2S rRNA is small and highly abundant, its presence can interfere with construction of sRNA libraries and compromise the quantification of other sRNAs. The 2S subunit is found in fruit fly and dark-winged fungus gnat species but absent from mosquitoes.
The tertiary structure of the small subunit ribosomal RNA (SSU rRNA) has been resolved by X-ray crystallography. The secondary structure of SSU rRNA contains 4 distinct domains: the 5', central, 3' major and 3' minor domains. Models of the secondary structure of the 5' domain (500–800 nucleotides) have been produced.
Biosynthesis
In eukaryotes
As the building-blocks for the organelle, production of rRNA is ultimately the rate-limiting step in the synthesis of a ribosome. In the nucleolus, rRNA is synthesized by RNA polymerase I using the specialty genes (rDNA) that encode for it, which are found repeatedly throughout the genome. The genes coding for 18S, 28S and 5.8S rRNA are located in the nucleolus organizer region and are transcribed into large precursor rRNA (pre-rRNA) molecules by RNA polymerase I. These pre-rRNA molecules are separated by external and internal spacer sequences and then methylated, which is key for later assembly and folding. After separation and release as individual molecules, assembly proteins bind to each naked rRNA strand and fold it into its functional form using cooperative assembly and progressive addition of more folding proteins as needed. The exact details of how the folding proteins bind to the rRNA and how correct folding is achieved remain unknown. The rRNA complexes are then further processed by reactions involving exo- and endo-nucleolytic cleavages guided by snoRNAs (small nucleolar RNAs) in complex with proteins. As these complexes are compacted together to form a cohesive unit, interactions between rRNA and surrounding ribosomal proteins are constantly remodeled throughout assembly in order to provide stability and protect binding sites. This process is referred to as the "maturation" phase of the rRNA lifecycle. The modifications that occur during maturation of rRNA have been found to contribute directly to control of gene expression by providing physical regulation of translational access of tRNA and mRNA. Some studies have found that extensive methylation of various rRNA types is also necessary during this time to maintain ribosome stability.
The genes for 5S rRNA are located outside the nucleolus and are transcribed into pre-5S rRNA by RNA polymerase III. The pre-5S rRNA enters the nucleolus for processing and assembly with the 28S and 5.8S rRNAs to form the LSU. The 18S rRNA forms the SSU by combining with numerous ribosomal proteins. Once both subunits are assembled, they are individually exported into the cytoplasm to form the 80S unit and begin initiation of translation of mRNA.
Ribosomal RNA is non-coding and is never translated into proteins of any kind: rRNA is only transcribed from rDNA and then matured for use as a structural building block for ribosomes. Transcribed rRNA is bound to ribosomal proteins to form the subunits of ribosomes and acts as the physical structure that pushes mRNA and tRNA through the ribosome to process and translate them.
Eukaryotic regulation
Synthesis of rRNA is up-regulated and down-regulated to maintain homeostasis by a variety of processes and interactions:
The kinase AKT indirectly promotes synthesis of rRNA as RNA polymerase I is AKT-dependent.
Certain angiogenic ribonucleases, such as angiogenin (ANG), can translocate and accumulate in the nucleolus. When the concentration of ANG becomes too high, some studies have found that ANG can bind to the promoter region of rDNA and unnecessarily increase rRNA transcription. This can be damaging to the nucleolus and can even lead to unchecked transcription and cancer.
During times of cellular glucose restriction, AMP-activated protein kinase (AMPK) discourages metabolic processes that consume energy but are non-essential. As a result, it is capable of phosphorylating RNA polymerase I (at the Ser-635 site) in order to down-regulate rRNA synthesis by disrupting transcription initiation.
Impairment or removal of more than one pseudouridine or 2′-O-methylation region from the ribosome decoding center significantly reduces the rate of rRNA transcription by reducing the rate of incorporation of new amino acids.
Formation of heterochromatin is essential to silencing rRNA transcription, without which ribosomal RNA is synthesized unchecked and greatly decreases the lifespan of the organism.
In prokaryotes
Similar to eukaryotes, the production of rRNA is the rate-limiting step in the prokaryotic synthesis of a ribosome. In E. coli, it has been found that rRNA is transcribed from the two promoters P1 and P2 found within seven different rrn operons. The P1 promoter is specifically responsible for regulating rRNA synthesis during moderate to high bacterial growth rates. Because the transcriptional activity of this promoter is directly proportional to the growth rate, it is primarily responsible for rRNA regulation. An increased rRNA concentration serves as a negative feedback mechanism to ribosome synthesis. High NTP concentration has been found to be required for efficient transcription of the rrn P1 promoters. They are thought to form stabilizing complexes with RNA polymerase and the promoters. In bacteria specifically, this association of high NTP concentration with increased rRNA synthesis provides a molecular explanation as to why ribosomal and thus protein synthesis is dependent on growth-rate. A low growth-rate yields lower rRNA / ribosomal synthesis rates while a higher growth rate yields a higher rRNA / ribosomal synthesis rate. This allows a cell to save energy or increase its metabolic activity dependent on its needs and available resources.
In prokaryotic cells, each rRNA gene or operon is transcribed into a single RNA precursor that includes 16S, 23S, 5S rRNA and tRNA sequences along with transcribed spacers. The RNA processing then begins before the transcription is complete. During processing reactions, the rRNAs and tRNAs are released as separate molecules.
Prokaryotic regulation
Because of the vital role rRNA plays in the cell physiology of prokaryotes, there is much overlap in rRNA regulation mechanisms. At the transcriptional level, there are both positive and negative effectors of rRNA transcription that facilitate a cell's maintenance of homeostasis:
An UP element upstream of the rrn P1 promoter can bind a subunit of RNA polymerase, thus promoting transcription of rRNA.
Transcription factors such as FIS bind upstream of the promoter and interact with RNA polymerase, facilitating transcription.
Anti-termination factors bind downstream of the rrn P2 promoter, preventing premature transcription termination.
Due to the stringent response, when the availability of amino acids is low, ppGpp (a negative effector) can inhibit transcription from both the P1 and P2 promoters.
Degradation
Ribosomal RNA is quite stable in comparison to other common types of RNA and persists for longer periods of time in a healthy cellular environment. Once assembled into functional units, ribosomal RNA within ribosomes is stable in the stationary phase of the cell life cycle for many hours. Degradation can be triggered via "stalling" of a ribosome, a state that occurs when the ribosome recognizes faulty mRNA or encounters other processing difficulties that cause translation by the ribosome to cease. Once a ribosome stalls, a specialized pathway on the ribosome is initiated to target the entire complex for disassembly.
In eukaryotes
As with any protein or RNA, rRNA production is prone to errors, resulting in the production of non-functional rRNA. To correct this, the cell allows for degradation of rRNA through the non-functional rRNA decay (NRD) pathway. Much of the research on this topic was conducted on eukaryotic cells, specifically the yeast Saccharomyces cerevisiae. Currently, only a basic understanding of how cells are able to target functionally defective ribosomes for ubiquitination and degradation in eukaryotes is available.
The NRD pathway for the 40S subunit may be independent of the NRD pathway for the 60S subunit. It has been observed that certain genes were able to affect degradation of certain pre-RNAs, but not others.
Numerous proteins are involved in the NRD pathway, such as Mms1p and Rtt101p, which are believed to complex together to target ribosomes for degradation. Mms1p and Rtt101p have been found to bind together, and Rtt101p is believed to recruit a ubiquitin E3 ligase complex, allowing the non-functional ribosomes to be ubiquitinated before being degraded.
Prokaryotes lack a homolog for Mms1, so it is unclear how prokaryotes are able to degrade non-functional rRNAs.
The growth rate of eukaryotic cells did not seem to be significantly affected by the accumulation of non-functional rRNAs.
In prokaryotes
Although there is far less research available on ribosomal RNA degradation in prokaryotes than in eukaryotes, there has still been interest in whether bacteria follow a degradation scheme similar to the NRD pathway in eukaryotes. Much of the research done for prokaryotes has been conducted on Escherichia coli. Many differences were found between eukaryotic and prokaryotic rRNA degradation, leading researchers to believe that the two degrade rRNA using different pathways.
Certain mutations in rRNA that were able to trigger rRNA degradation in eukaryotes were unable to do so in prokaryotes.
Point mutations in 23S rRNA caused both 23S and 16S rRNAs to be degraded, whereas in eukaryotes, mutations in the rRNA of one subunit would cause only that subunit's rRNA to be degraded.
Researchers found that removal of a whole helix structure (H69) from the 23S rRNA did not trigger its degradation. This led them to believe that H69 was critical for endonucleases to recognize and degrade the mutated rRNA.
Sequence conservation and stability
Due to the prevalent and unwavering nature of rRNA across all organisms, the study of its resistance to gene transfer, mutation, and alteration without destruction of the organism has become a popular field of interest. Ribosomal RNA genes have been found to be tolerant to modification and incursion. However, when rRNA sequences are altered beyond this tolerance, cells have been found to become compromised and quickly cease normal function. These key traits of rRNA have become especially important for gene database projects (comprehensive online resources such as SILVA or SINA) where alignment of ribosomal RNA sequences from across the different biologic domains greatly eases "taxonomic assignment, phylogenetic analysis and the investigation of microbial diversity."
Examples of resilience:
Addition of large, nonsensical RNA fragments into many parts of the 16S rRNA unit does not observably alter the function of the ribosomal unit as a whole.
The non-coding RNA RD7 has the capability to alter processing of rRNA to make the molecules resistant to degradation by carboxylic acid. This is a crucial mechanism in maintaining rRNA concentrations during active growth, when acid build-up (due to the substrate-level phosphorylation required to produce ATP) can become toxic to intracellular functions.
Insertion of hammerhead ribozymes that are capable of cis-cleavages along 16S rRNA greatly inhibits function and diminishes stability.
While most cellular functions degrade heavily after only a short period of exposure to hypoxic environments, rRNA remains un-degraded and resolved after six days of prolonged hypoxia. Only after such an extended period of time do rRNA intermediates (indicative of degradation finally occurring) begin to present themselves.
Significance
Ribosomal RNA characteristics are important in evolution, and thus in taxonomy and medicine.
rRNA is one of only a few gene products present in all cells. For this reason, genes that encode the rRNA (rDNA) are sequenced to identify an organism's taxonomic group, calculate related groups, and estimate rates of species divergence. As a result, many thousands of rRNA sequences are known and stored in specialized databases such as RDP-II and SILVA.
Alterations to rRNA are what allow certain disease-causing bacteria, such as Mycobacterium tuberculosis (the bacterium that causes tuberculosis) to develop extreme drug resistance. Due to similar issues, this has become a prevalent problem in veterinary medicine where the main method for handling bacterial infection in pets is administration of drugs that attack the peptidyl-transferase centre (PTC) of the bacterial ribosome. Mutations in 23S rRNA have created perfect resistance to these drugs as they operate together in an unknown fashion to bypass the PTC entirely.
rRNA is the target of numerous clinically relevant antibiotics: chloramphenicol, erythromycin, kasugamycin, micrococcin, paromomycin, linezolid, alpha-sarcin, spectinomycin, streptomycin, and thiostrepton.
rRNA has been shown to be the origin of species-specific microRNAs, like miR-663 in humans and miR-712 in mice. These particular miRNAs originate from the internal transcribed spacers of the rRNA.
Human genes
45S: RNR1, RNR2, RNR3, RNR4, RNR5; (unclustered) RNA18SN1, RNA18SN2, RNA18SN3, RNA18SN4, RNA18SN5, RNA28SN1, RNA28SN2, RNA28SN3, RNA28SN4, RNA28SN5, RNA45SN1, RNA45SN2, RNA45SN3, RNA45SN4, RNA45SN5, RNA5-8SN1, RNA5-8SN2, RNA5-8SN3, RNA5-8SN4, RNA5-8SN5
5S: RNA5S1, RNA5S2, RNA5S3, RNA5S4, RNA5S5, RNA5S6, RNA5S7, RNA5S8, RNA5S9, RNA5S10, RNA5S11, RNA5S12, RNA5S13, RNA5S14, RNA5S15, RNA5S16, RNA5S17
Mt: MT-RNR1, MT-TV (co-opted), MT-RNR2
| Biology and health sciences | Nucleic acids | Biology |
1718041 | https://en.wikipedia.org/wiki/Western%20lowland%20gorilla | Western lowland gorilla | The western lowland gorilla (Gorilla gorilla gorilla) is one of two Critically Endangered subspecies of the western gorilla (Gorilla gorilla) that lives in montane, primary and secondary forest and lowland swampland in central Africa in Angola (Cabinda Province), Cameroon, Central African Republic, Republic of the Congo, Democratic Republic of the Congo, Equatorial Guinea and Gabon. It is the nominate subspecies of the western gorilla, and the smallest of the four gorilla subspecies.
The western lowland gorilla is the only subspecies kept in zoos with the exception of Amahoro, a female eastern lowland gorilla at Antwerp Zoo, and a few mountain gorillas kept captive in the Democratic Republic of the Congo.
Description
The western lowland gorilla is the smallest subspecies of gorilla but still has exceptional size and strength. This subspecies exhibits pronounced sexual dimorphism. They possess no tails and have jet black skin along with coarse black hair that covers their entire body except for the face, ears, hands and feet. The hair on the back and rump of males takes on a grey coloration and is also lost as they get older. This coloration is the reason why older males are known as "silverbacks". Their hands are proportionately large with nails on all digits (similar to those of humans) and very large thumbs. They have short muzzles, prominent brow ridges, large nostrils and small eyes and ears. Other features are large muscles in the jaw region along with broad and strong teeth. Among these teeth are strong sets of frontal canines and large molars in the back of the mouth for grinding fruits and vegetables.
A male standing erect can be up to tall and weigh up to . Males have an average weight of , females of . Males in captivity, however, are noted to be capable of reaching weights up to . Males stand upright at , females at . Zoo owner John Aspinall claimed a silverback gorilla in his prime has the physical strength of seven or eight Olympic weightlifters, but this claim is unverified. Western gorillas frequently stand upright, but walk in a hunched, quadrupedal fashion, with hands curled and knuckles touching the ground; as a result, their arm span is greater than their standing height.
Albinism
The only known albino gorilla – named Snowflake – was a wild-born western lowland gorilla originally from Equatorial Guinea. Snowflake, a male gorilla, was taken from the wild and brought to the Barcelona Zoo in 1966 at a very young age. He presented the typical traits and characteristics of albinism seen in humans, including white hair, pinkish skin, light colored eyes, reduced visual perception and photophobia, and was diagnosed with non-syndromic albinism. The genetic variant for Snowflake’s albinism was identified by the scientists as a non-synonymous single nucleotide polymorphism located in a transmembrane region of SLC45A2. This transporter is also known to be involved in oculocutaneous albinism type 4 in humans. As it is a recessive allele, and his parents were an uncle and niece who were both carriers, this revealed the first evidence of inbreeding in western lowland gorillas.
Behavior
Social structure
Western lowland gorilla groups travel within a home range typically between in area. Gorillas do not display territorial behavior, and neighboring groups often overlap ranges. The group usually favours a certain area within the home range, but seems to follow a seasonal pattern depending upon the availability of ripening fruits and, at some sites, localised large open clearings (swamps and "bais"). Gorillas normally travel per day. Populations feeding on high-energy foods that vary spatially and seasonally tend to have greater day ranges than those feeding on lower-quality but more consistently available foods. Larger groups travel greater distances in order to obtain sufficient food.
It is easier for males to travel alone and move between groups: before reaching the age of sexual maturity, males leave their natal group and go through a "bachelor stage" that can last several years, spent either solitarily or in a nonbreeding group. However, while both sexes leave their birth group, females are always part of a breeding group. Males like to settle with other male members of their family. Their breeding groups consist of one silverback male, three adult females and their offspring. The male gorilla has the role of the protector. Females tend to make bonds with other females in their natal group only, but form strong bonds with males. Males also aggressively compete for contact with females.
The group of gorillas is led by one or more adult males. In cases where there is more than one silverback male in a group, they are most likely father and son. Groups containing only one male are believed to be the basic unit of the social group, gradually growing in size due to reproduction and new members migrating in. In the study done at Lope, gorillas harvest most of their food arboreally, but less than half of their night nests are built in trees. They are often found on the ground, and the group has up to 30 gorillas. Western lowland gorillas live in the smallest family groups of all gorillas, with an average of four to eight members in each. The leader (the silverback) organizes group activities, like eating, nesting and travelling in their home range. Those who challenge this alpha male are apt to be cowed by impressive shows of physical power. He may stand upright, throw things, make aggressive charges, and pound his huge chest with open or cupped hands while barking out powerful hoots or unleashing a frightening roar. Despite these displays and the animals' obvious physical power, gorillas are generally calm and nonaggressive unless they are disturbed. Young gorillas, from three to six years old, remind human observers of children. Much of their day is spent in play, climbing trees, chasing one another and swinging from branches.
Reproduction
Female western lowland gorillas do not produce many offspring, because they do not reach sexual maturity until the age of 8 or 9. Female gorillas give birth to one infant after a gestation period of nearly nine months. Female gorillas do not show signs of pregnancy. Unlike their powerful parents, newborns are tiny, weighing , but are able to cling to their mothers' fur. These infants ride on their mothers' backs from the age of four months through the first two or three years of their lives. Infants can be dependent on their mother for up to five years.
A study of over 300 births to captive female gorillas revealed that older females tend to give birth to more male offspring than do females under 8 years old. This pattern is likely to result from selective pressures on females to have males at a time when they can provision them most effectively, as male reproductive success probably varies more than that of females and depends more on the maternal role.
Female western lowland gorillas living in a group led by a single male have been observed to display sexual behavior during all stages of their reproductive cycle, including times of non-fertility. Three out of four females have been observed to engage in sexual behavior while pregnant, and two out of three females have been observed to engage in sexual behavior while lactating. Females are significantly more likely to engage in sexual behavior on a day when another female is sexually active. It has been found that female western lowland gorillas participate in non-reproductive sexual behavior in order to increase their reproductive success through sexual competition. By increasing her own reproductive success, a female decreases the reproductive success of other female gorillas, regardless of their reproductive state.
Infanticide by adult male gorillas has occasionally been observed in this subspecies. Victims are never related to the killer. A male does this in order to have the opportunity to mate with the mother, who otherwise would be unavailable while caring for her young offspring.
Intelligence
Use of tools
Their intelligence is displayed through their ability to fashion natural materials into tools that help them gather food more conveniently. While the use and manufacture of tools to extract ants and termites is a well-documented behavior in wild chimpanzees, it has never been observed in other great apes in their natural habitat and never seen in other primates in captivity.
Western lowland gorillas can adapt tools to a particular use by selecting branches, removing projections such as leaves and bark, and adapting their length to the depth of the holes. It appears that they also plan the use of the tool, since they begin with one of the biggest sticks available and progressively modify it until it is the perfect fit for inserting into a hole that contains food. This demonstrates the gorillas' acquisition of high level sensorimotor intelligence, similar to that of young human children.
A gorilla has been observed to use a stick to measure the depth of water. In 2009, a western lowland gorilla at Buffalo Zoological Gardens used a bucket to collect water. In an experiment, one adult male gorilla and three adult female gorillas were given five-gallon buckets near a standing pool. Two of the younger females were able to fill the buckets with water. This is the first record of gorillas spontaneously using tools to drink in zoos.
Communication
Another example of gorillas' significant intelligence is their ability to comprehend simple sign language. In the mid-1970s, researchers turned their attention to communicating with gorillas via sign language. One gorilla, Koko, was born in San Francisco Zoo on July 4, 1971. Francine Patterson officially started working with Koko on July 12, 1972, with the goal of teaching her sign language. In the beginning, Patterson focused on teaching Koko only three basic signs: "food", "drink", and "more". Koko would learn signs through observation and from Patterson or one of her colleagues molding Koko's hands into the correct sign. On August 7, Patterson began a more formal routine of teaching Koko those three signs. In the couple of weeks before that, Koko had been using gestures that seemed like attempts at the signs taught, but they were deemed coincidental and random rather than intentional. Only two days after they started the more formal routine, Koko started responding consistently with the sign "food" when prompted to do so. Within the first three months, Koko made 16 different combinations of signs and was also starting to form simple questions by using eye contact and different positioning of signs by the body. Koko mastered more than 1,000 signs and was said to be able to connect up to eight words together to form a statement expressing wants, needs, thoughts, or simple responses.
There has been a study examining the ability of western lowland gorillas to give objects to and exchange objects with humans. This involved humans holding objects such as fruit, leaves or peanuts in one hand. Once the gorillas had given twigs to the humans, they would receive one of these objects. If the gorillas did not give them a twig, they would not get their desired object. The gorillas were shown to quickly learn about receiving rewards, as mistakes made by the gorillas at the beginning of the experiments gradually decreased.
Ecology
Habitat
Western lowland gorillas primarily live in rainforest, swamp forest, brush, secondary vegetation, clearings and forest edges, abandoned farming fields and riverine forest. They live in primary and secondary lowland tropical forest at elevations that extend from sea level up to . The average amount of rainfall in the areas where western lowland gorillas typically reside is about a year, with the greatest rainfall between the months of August and November. Western lowland gorillas are not typically observed in areas that are close to human settlements and villages. They have been known to avoid areas with roads and farms that show signs of human activity. These gorillas favor areas where edible plants are more copious. Swamp forest is now considered an important food source and habitat for the western lowland gorilla. These areas support the gorillas in both wet and dry seasons. The forest of the Republic of the Congo is currently considered to host the majority of the western lowland gorilla population. The isolation of the large swampy forest areas protects the gorillas.
Diet
The western lowland gorilla is primarily herbivorous, and its main diet consists of roots, shoots, fruit, wild celery, tree bark and pulp, which are found in the thick forest of Central and West Africa. During the wet season, gorillas commonly consume fruits. In the dry season, they eat less fleshy fruits, but they continue to eat other kinds of fruits. The diversity of fruits consumed was higher in a poor fruit year, when favored fruit species failed to produce large crops. They may also eat insects from time to time. Herbaceous stems are the common food item that provides fiber.
Important food species have been divided into three categories: staple foods, which are eaten on a daily or weekly basis throughout the year; seasonal foods, which make up the majority of the diet when available; and fallback foods, which are always available but eaten only or mainly during fruit-scarce months. The adult will eat around of food per day. Gorillas will climb trees up to 15 meters in height in search of food. They never completely strip vegetation from a single area, since the rapid regrowth of the vegetation allows them to stay within a reasonably confined home range for extended periods of time.
They eat a combination of fruits and foliage, providing a balance of nutrients, depending on the time of year. However, when ripe fruit is available, they tend to eat more fruit as opposed to foliage. When ripe fruit is in scarce supply, they eat leaves, herbs and bark. During the rainy months of July and August, fruit is ripe; however, in the dry seasons, ripe fruit is scarce. Gorillas choose fruit that is high in sugar for energy, as well as fiber.
Relationship with humans
The presence of western lowland gorillas has allowed the study of how gorillas compare with humans in regard to human diseases, behavior, and linguistic and psychological aspects of their lives. They are hunted illegally for their skins and meat in Africa and captured to be sold to zoos. While defended as economically profitable for restaurants and local people, such hunting contributes greatly to the endangered status of the western lowland gorilla. They are also seen as a crop pest in western Africa, because they raid plantations, and so destroy valuable crops.
Threats
Hunting and logging
In tropical forest, gorillas are hunted to provide meat for the bushmeat trade. Logging also destroys gorilla habitats, although it may also provide increased herbaceous vegetation as a result of gaps in the tree cover. Destruction of gorilla habitat may harm the overall forest ecosystem. Western lowland gorillas are seed dispersers, which is beneficial to many of the animals in the forest, so their extinction could affect many other animals, which could over time destroy their current ecosystem.
Population decline and recovery
The western lowland gorilla population in the wild faces a number of threats to its survival. These include deforestation, farming, grazing and the expanding human settlements that cause forest loss. Human intervention in the wild is correlated with the destruction of habitats and an increase in bushmeat hunting. Another threat is infertility. Generally, female gorillas mature at 10–12 years of age (or earlier, at 7–8 years). Males mature more slowly: they are rarely strong and dominant enough to reproduce before 15–20 years of age. The fecundity of females (their capacity to produce young in great numbers) appears to decline by the age of 18. Of one half of captive females of viable reproductive age, approximately 30% had only a single birth. However, those gorillas that do not reproduce may prove to be a valuable resource, since the use of assisted reproductive techniques helps to maintain genetic diversity in the limited populations in zoos.
Conservation
In the 1980s, the gorilla population in equatorial Africa was estimated by census at 100,000. Researchers later adjusted the figure to less than half of that because of poaching and diseases. Surveys conducted by the Wildlife Conservation Society in 2006 and 2007 found that about 125,000 previously unreported gorillas have been living in the swamp forests of Lake Télé Community Reserve and in neighboring Marantaceae (dryland) forests in the Republic of the Congo. However, gorillas remain vulnerable to Ebola, deforestation and poaching.
In 2002 and 2003, there was an Ebola outbreak in the Lossi sanctuary population, and in 2004, there was an Ebola outbreak in the Lokoué forest clearing in Odzala-Kokoua National Park, both in the Republic of the Congo. The Ebola outbreak in the Lokoué forest clearing affected the individuals living in groups and the adult females more than the solitary males, resulting in an increase in the proportion of solitary males to those living in groups. This population decreased from 377 individuals to 38 individuals two years after the outbreak, and to 40 individuals six years after the outbreak. The population is still slowly recovering and, it is hoped, will return to the demographic structure of an unaffected population through new births and the formation of breeding groups. This Ebola outbreak also reduced the Maya Nord population (52 kilometres northwest of Lokoué) from 400 individuals to considerably fewer. Because of these outbreaks, the International Union for Conservation of Nature (IUCN) updated the status of western lowland gorillas from "endangered" to "critically endangered".
In the northeastern part of the Republic of the Congo, western lowland gorillas are still being hunted for their bushmeat and the young for pets; five percent of the subspecies is killed each year because of this. Deforestation of this area allows for the trade of bushmeat and even more poaching. Commercial poaching of chimpanzees, forest elephants and western gorillas in the Republic of the Congo resulted from the increased amount of commercial logging and infrastructure. Deforestation and logging allowed for the creation of roads which allowed hunters to hunt deeper into the forest, increasing the amount of poaching and bushmeat trade in the area. The Republic of the Congo has put in place a conservation effort to conserve different species such as chimpanzees, forest elephants and western gorillas from poaching and deforestation. This conservation effort would allow these species to benefit from vegetation and ecologically important resources.
Bushmeat hunting and timber harvesting in the western lowland gorilla's habitat have negatively affected the probability of its survival. The western lowland gorilla is considered to be critically endangered by the IUCN. The western lowland gorillas, like many gorillas, are essential to the composition of the rainforest due to their seed distribution. The conservation of the western lowland gorilla has been made a priority by many organizations. The Wildlife Conservation Society (WCS) has been working with the local community in the Congo Basin to establish wildlife management programs. The WCS is also working in Congo and surrounding countries to limit the bushmeat trade by enforcing laws and hunting restrictions and also helping the local people find new sources of protein.
Zoos worldwide have a population of 550 western lowland gorillas, and the Cincinnati Zoo leads the United States in western lowland gorilla births.
In captivity
Stress
Stress has been known to cause both physiological and behavioral chronic issues for captive species including, but not limited to, altered reproductive cycling and behavior, reduced immune responses, disrupted hormone and growth levels, reduced body weight, heightened abnormal activities and aggression and decreased exploratory behavior with increased hiding behaviors. Such stress reactions could be caused by sounds, light conditions, odors, temperature and humidity conditions, material makeup of enclosures, habitat size constraints, lack of proper hiding areas, forced closeness to humans, routine husbandry and feeding conditions, or abnormal social groups to name a few. Use of both internal and external privacy screens on exhibit windows has been shown to alleviate stresses from visual effects of high crowd densities, leading to decreased stereotypic behaviors in the gorillas. Playing naturalistic auditory stimuli as opposed to classical music, rock music, or no auditory enrichment (which allows for crowd noise, machinery, etc. to be heard) has been noted to reduce stress behavior as well. Enrichment modifications to feed and foraging, where clover-hay is added to an exhibit floor, decrease stereotypic activities while simultaneously increasing positive food-related behaviors.
Stereotypic behaviors
Stereotypic behaviors are abnormal or compulsive behaviors. It is common for non-human primates kept in captivity to exhibit behaviors deviating from the normal behavior observed of them in the wilderness. In captive gorillas, such frequent aberrant behaviors include eating disorders (such as regurgitation, reingestion and coprophagy), self-injurious or conspecific aggression, pacing, rocking, sucking of fingers or lip smacking, and overgrooming. Negative vigilance behaviors toward visitors have been identified as staring, posturing and charging at visitors. Groups of bachelor gorillas containing young silverbacks have significantly higher levels of aggression and wounding rates than mixed age and sex groups.
A particularly abnormal behavior is hair-plucking, which occurs across many species of mammals and birds. Studies on the topic show that of all the western lowland gorillas housed in the Association of Zoos and Aquariums (AZA) population, 15% of the surveyed population displayed hair-plucking behavior, with 62% of all institutions housing a hair-plucker. Individual gorillas, particularly those of a more solitary nature, are more likely to self-pluck using their fingers, and are more likely to pick up this behavior if they were exposed, as young and not yet mature gorillas, to a group member that plucked hair.
Recent research on captive gorilla welfare emphasizes a need to shift to individual assessments instead of a one-size-fits-all group approach to understanding how welfare increases or decreases based on a variety of factors. Individual characteristics such as age, sex, personality and individual histories are essential in understanding that stressors will affect each individual gorilla and their welfare differently.
Genetics
The gorilla became the next-to-last great ape genus to have its genome sequenced, in 2012. This has given scientists further insight into the evolution and origin of humans. Although chimpanzees are the closest extant relatives of humans, 15% of the human genome was found to be more like that of the gorilla. In addition, 30% of the gorilla genome "is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression". Analysis of the gorilla genome has cast doubt on the idea that the rapid evolution of hearing genes gave rise to language in humans, as it also occurred in gorillas.
Furthermore, in 2013, a study was conducted to better understand the genetic variation in gorillas by using reduced representation sequencing. This study used a sample of 12 western lowland gorillas and two eastern lowland gorillas, all in captivity. The study found that western lowland gorillas are more likely to be heterozygous than homozygous at variant sites. Most pure (meaning not inbred) western lowland gorillas have a homozygous-to-heterozygous (hom/het) ratio that ranges from 0.5 to 0.7. Because of this variation, it has been concluded that the gorillas display a moderate substructure within the western lowland population in general.
Finally, the study sought to analyze the allele frequency spectrum (AFS) in western lowland gorillas, because knowledge of the AFS can provide information about demographic history and evolutionary processes. The AFS showed that western lowland gorillas display a deficit of rare alleles.
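To make the two statistics above concrete, the following is a minimal, illustrative sketch in Python; the genotype values are invented toy data and the function names are mine, not the study's. It computes a per-individual hom/het ratio and a folded allele frequency spectrum from biallelic genotype calls, coded per site and per individual as 0, 1 or 2 copies of the alternate allele.

from collections import Counter

# rows = variant sites, columns = individuals (toy values, not study data)
genotypes = [
    [0, 1, 1, 2],
    [1, 1, 0, 1],
    [2, 1, 1, 0],
    [0, 0, 1, 1],
]

def hom_het_ratio(column):
    # homozygous-alternate sites (coded 2) divided by heterozygous sites
    # (coded 1); homozygous-reference sites (coded 0) are ignored
    counts = Counter(column)
    return counts[2] / counts[1] if counts[1] else float("inf")

for i in range(len(genotypes[0])):
    column = [row[i] for row in genotypes]
    print(f"individual {i}: hom/het = {hom_het_ratio(column):.2f}")

# folded allele frequency spectrum: for each site, count minor-allele
# copies across all sampled chromosomes (2 per diploid individual)
n_chromosomes = 2 * len(genotypes[0])
afs = Counter(min(sum(row), n_chromosomes - sum(row)) for row in genotypes)
print("folded AFS (minor-allele count -> number of sites):", dict(afs))

A deficit of rare alleles, as reported for western lowland gorillas, would appear in such a spectrum as fewer sites than expected in the lowest minor-allele-count classes.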
Disease
Western lowland gorillas are believed to be one of the zoonotic origins of HIV/AIDS. The SIV, or simian immunodeficiency virus, that infects them is similar to a certain strain of HIV-1. The HIV-1 virus exhibits phylogeographic clustering, which is due to large rivers acting as barriers to movement between populations. This clustering allows pinpointing the probable geographic origins of two of the human virus clades. In the southern part of Cameroon, feces from populations of western lowland gorillas have been examined for the virus. Out of 2,934 gorilla samples, 70 reacted with at least one HIV-1 antigen. These samples came from four field sites, all located in southern Cameroon.
The origin of AIDS has been linked to a virus known to infect more than 40 species of nonhuman primates in Africa. HIV-1 is composed of four phylogenetic lineages, each of which at some point in time independently arose through cross-species transmission of SIV (simian immunodeficiency virus). The simian immunodeficiency virus has infected various African primates such as gorillas and chimpanzees.
Disease has also been a factor in the survival of the western lowland gorilla. The Ebola epizootic in western and central Africa has caused a mortality rate of more than 90% in western lowland gorillas. From 2003 to 2004, two epizootics infected the western lowland gorilla, causing two-thirds of their population to disappear. The outbreak was monitored in the Republic of Congo by Magdalena Bermejo and other field-based primatologists, as it also spread to humans through contact with bushmeat. The catastrophe led the World Conservation Union to designate the western lowland gorilla a critically endangered species. Malaria is also an issue that has been arising for the western lowland gorillas. Out of 51 faecal samples from habituated individuals, 25 were shown to contain Plasmodium DNA. Laverania, a subgenus of the parasitic protozoan genus Plasmodium, was found in these studies. Varying exposure to different Anopheles mosquitoes transmitting Plasmodium species is known to be the origin of malaria in western lowland gorillas.
Wild western lowland gorillas are known to consume the seeds of the "grains of paradise" plant, whose consumption appears to support healthy cardiovascular condition; the occasionally poor cardiovascular health of lowland gorillas in zoos has been postulated to be due to the lack of Aframomum seeds in zoo gorillas' diets. Adult male gorillas are prone to fibrosing cardiomyopathy, a degenerative heart disease.
| Biology and health sciences | Apes | Animals |
1718041 is retained above; this row follows. 1718317 | https://en.wikipedia.org/wiki/Lorenz%20gauge%20condition | Lorenz gauge condition | In electromagnetism, the Lorenz gauge condition or Lorenz gauge (after Ludvig Lorenz) is a partial gauge fixing of the electromagnetic vector potential by requiring \(\partial_\mu A^\mu = 0.\) The name is frequently confused with Hendrik Lorentz, who has given his name to many concepts in this field. (See, however, the Note added below for a different interpretation.) The condition is Lorentz invariant. The Lorenz gauge condition does not completely determine the gauge: one can still make a gauge transformation \(A^\mu \to A^\mu + \partial^\mu f,\) where \(\partial^\mu\) is the four-gradient and \(f\) is any harmonic scalar function: that is, a scalar function obeying \(\Box f = 0,\) the equation of a massless scalar field.
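As a quick check (the algebra is not spelled out above), such residual transformations do preserve the condition: if \(\partial_\mu A^\mu = 0\) and \(A'^\mu = A^\mu + \partial^\mu f\), then
\[\partial_\mu A'^\mu = \partial_\mu A^\mu + \partial_\mu \partial^\mu f = 0 + \Box f = 0\]
exactly when \(f\) is harmonic, \(\Box f = 0\).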
The Lorenz gauge condition is used to eliminate the redundant spin-0 component in Maxwell's equations when these are used to describe a massless spin-1 quantum field. It is also used for massive spin-1 fields where the concept of gauge transformations does not apply at all.
Description
In electromagnetism, the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials. The condition is
\[A^\mu{}_{,\mu} \equiv \partial_\mu A^\mu = 0,\]
where \(A^\mu\) is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant. It still leaves substantial gauge degrees of freedom.
In ordinary vector notation and SI units, the condition is
\[\nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t} = 0,\]
where \(\mathbf{A}\) is the magnetic vector potential and \(\varphi\) is the electric potential; see also gauge fixing.
In Gaussian units the condition is
\[\nabla \cdot \mathbf{A} + \frac{1}{c} \frac{\partial \varphi}{\partial t} = 0.\]
A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field:
\[\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} = -\frac{\partial (\nabla \times \mathbf{A})}{\partial t}.\]
Therefore,
\[\nabla \times \left(\mathbf{E} + \frac{\partial \mathbf{A}}{\partial t}\right) = 0.\]
Since the curl is zero, that means there is a scalar function \(\varphi\) such that
\[-\nabla \varphi = \mathbf{E} + \frac{\partial \mathbf{A}}{\partial t}.\]
This gives a well-known equation for the electric field:
\[\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}.\]
This result can be plugged into the Ampère–Maxwell equation,
\[\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t},\]
which, using \(\nabla \times (\nabla \times \mathbf{A}) = \nabla(\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A}\), becomes
\[\nabla (\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A} = \mu_0 \mathbf{J} - \frac{1}{c^2} \nabla \frac{\partial \varphi}{\partial t} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2}.\]
This leaves
\[\nabla \left(\nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t}\right) = \mu_0 \mathbf{J} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} + \nabla^2 \mathbf{A}.\]
To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefore, it is convenient to choose the Lorenz gauge condition, which makes the left hand side zero and gives the result
\[\Box \mathbf{A} \equiv \left[\frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2\right] \mathbf{A} = \mu_0 \mathbf{J}.\]
A similar procedure with a focus on the electric scalar potential and making the same gauge choice will yield
\[\Box \varphi \equiv \left[\frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2\right] \varphi = \frac{\rho}{\varepsilon_0}.\]
These are simpler and more symmetric forms of the inhomogeneous Maxwell's equations.
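In four-vector notation, with \(A^\mu = (\varphi/c, \mathbf{A})\) and \(J^\mu = (c\rho, \mathbf{J})\), the two wave equations above can be written as the single expression
\[\Box A^\mu = \mu_0 J^\mu,\]
a compact restatement that makes the Lorentz invariance of the gauge-fixed equations manifest.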
Here
\[c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}\]
is the vacuum velocity of light, and \(\Box\) is the d'Alembertian operator with the \((+,-,-,-)\) metric signature. These equations are not only valid under vacuum conditions, but also in polarized media, if \(\rho\) and \(\mathbf{J}\) are source density and circulation density, respectively, of the electromagnetic induction fields \(\mathbf{E}\) and \(\mathbf{B}\) calculated as usual from \(\varphi\) and \(\mathbf{A}\) by the equations
\[\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A}.\]
The explicit solutions for \(\varphi\) and \(\mathbf{A}\) (unique, if all quantities vanish sufficiently fast at infinity) are known as retarded potentials.
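For reference, these retarded-potential solutions take the standard textbook form in SI units (the explicit integrals were not preserved in the text above):
\[\varphi(\mathbf{r}, t) = \frac{1}{4\pi\varepsilon_0} \int \frac{\rho\!\left(\mathbf{r}',\, t - \tfrac{|\mathbf{r}-\mathbf{r}'|}{c}\right)}{|\mathbf{r}-\mathbf{r}'|} \, \mathrm{d}^3 r', \qquad
\mathbf{A}(\mathbf{r}, t) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}\!\left(\mathbf{r}',\, t - \tfrac{|\mathbf{r}-\mathbf{r}'|}{c}\right)}{|\mathbf{r}-\mathbf{r}'|} \, \mathrm{d}^3 r',\]
where each source point contributes with the time delay \(|\mathbf{r}-\mathbf{r}'|/c\) required for light to travel from \(\mathbf{r}'\) to \(\mathbf{r}\).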
History
When originally published in 1867, Lorenz's work was not received well by James Clerk Maxwell. Maxwell had eliminated the Coulomb electrostatic force from his derivation of the electromagnetic wave equation since he was working in what would nowadays be termed the Coulomb gauge. The Lorenz gauge hence contradicted Maxwell's original derivation of the EM wave equation by introducing a retardation effect to the Coulomb force and bringing it inside the EM wave equation alongside the time varying electric field, which was introduced in Lorenz's paper "On the identity of the vibrations of light with electrical currents". Lorenz's work was the first use of symmetry to simplify Maxwell's equations after Maxwell himself published his 1865 paper. In 1888, retarded potentials came into general use after Heinrich Rudolf Hertz's experiments on electromagnetic waves. In 1895, a further boost to the theory of retarded potentials came after J. J. Thomson's interpretation of data for electrons (after which investigation into electrical phenomena changed from time-dependent electric charge and electric current distributions over to moving point charges).
Note added on 26 November 2024: It should be pointed out that Lorenz actually derived the 'condition' from postulated integral expressions for the potentials (nowadays known as retarded potentials), whereas Lorentz (and before him Emil Wiechert) imposed it on the potentials to fix the gauge (see, e.g., his 1904 Encyclopedia article on electron theory). So Lorenz' equation is not a real condition but a mathematical result. It is therefore misleading to attribute the gauge condition to Lorenz.
| Physical sciences | Electrodynamics | Physics |
1718613 | https://en.wikipedia.org/wiki/Cross%20River%20gorilla | Cross River gorilla | The Cross River gorilla (Gorilla gorilla diehli) is a critically endangered subspecies of the western gorilla (Gorilla gorilla). It was named a new species in 1904 by Paul Matschie, a mammalian taxonomist working at the Humboldt University Zoological Museum in Berlin, but its populations were not systematically surveyed until 1987.
It is the most western and northern form of gorilla, and is restricted to the forested hills and mountains of the Cameroon-Nigeria border region at the headwaters of the Cross River. It is separated by about from the nearest population of western lowland gorillas (Gorilla gorilla gorilla), and by around from the gorilla population in the Ebo Forest of Cameroon. Estimates from 2014 suggest that fewer than 250 mature Cross River gorillas remain, making them the world's rarest great ape. Groups of these gorillas concentrate their activities in 11 localities across a range, though recent field surveys confirmed the presence of gorillas outside of their known localities suggesting a wider distribution within this range. This distribution is supported by genetic research, which has found evidence that many Cross River gorilla localities continue to maintain contact through the occasional dispersal of individuals. In 2009, the Cross River gorilla was finally captured on professional video on a forested mountain in Cameroon.
Description
The Cross River gorilla was first described as a new species of the western gorilla by Paul Matschie, a mammalian taxonomist, in 1904. Its morphological distinctiveness was confirmed in 1987. Subsequent analyses of cranial and tooth morphology, long bone proportions and distribution demonstrated the distinctiveness of the Cross River gorilla and it was described as a distinct subspecies in 2000.
Compared with western lowland gorillas, Cross River gorillas have noticeably smaller palates, smaller cranial vaults, and shorter skulls. The Cross River gorilla is not known to differ much in terms of body size or limb and bone length from western lowland gorillas. However, measurements taken from a male suggest that they have shorter hands and feet and a larger opposability index than western lowland gorillas.
According to Sarmiento and Oates's study published by the American Museum of Natural History, the Cross River gorilla has been described as having smaller dentitions, smaller palates, smaller cranial vaults, and shorter skulls than western lowland gorillas. The Royal Belgian Institute of Natural Sciences depicted the Cross River gorilla as the largest living primate, with a barrel chest, relatively even hair, a bare black face and chest, small ears, bare shaped brows that are joined, and nostril margins that are raised. They are clearly not the largest gorillas, and the distinctiveness of their external characters still needs to be verified.
Other statistics include:
Average adult male height: .
Average adult male weight: .
Average adult female height: .
Average adult female weight: .
Evolution
In 2000 Esteban E. Sarmiento and John F. Oates proposed and supported the hypothesis that the Cross River gorilla began to evolve into a distinct subspecies of Gorilla gorilla during an arid period of the African Pleistocene phase in response to declining food sources and a greater emphasis on herbivory and terrestrial behaviors.
The team stated that ancestors of the Cross River gorilla may have been restricted to the forests near the Cross River headwaters and/or elsewhere in the Cameroon highlands. They wrote that the Cross River gorillas may not have spread much since their isolation. The Gorilla gorilla gorilla ancestors differentiated from the Cross River gorilla by spreading beyond this area, somewhere to the south and/or east of the Sanaga. Sarmiento and Oates stated that there is no evidence to suggest that G. g. gorilla and G. g. diehli are sympatric.
Habitat
The Cross River gorilla, like many other gorilla subspecies, prefers a dense forest habitat that is uninhabited by humans. Due to their body size, Cross River gorillas require large and diverse areas of forest to meet their habitat requirements. As with most endangered primates, their natural habitat lies where humans often live and extract natural resources. Forests inhabited by the Cross River gorilla vary in altitude from approximately above sea level. Between 1996 and 1999, field work was conducted on Afi Mountain in Cross River State, Nigeria, for a period of 32 months. A great deal of data were collected on habitat types and topography (mapped using line transects), climate, the spatial and temporal availability of tree and herb foods, and the Cross River gorilla's ranging behavior, diet, and grouping patterns. These data were all assessed from indirect evidence, such as feeding trails, nests, and feces.
The habitats of the Cross River gorilla are negatively affected by drastic deforestation and fragmentation of the land, which leave the subspecies with few options for survival. As a result of deforestation and fragmentation, there are drastic reductions in carrying capacity; in other words, the size of the territories these animals inhabit has been significantly reduced. Because the human population living in this area is high, the amount of resources available to the Cross River gorillas is limited. Even though this decrease in the availability of land may appear to be a problem, research studies have found that an adequate amount of rainforest still remains that is suitable for this subspecies. If, however, human pressures toward deforestation continue, these territories will continue to diminish and ultimately will not exist. Additional examples of human activity that threaten Cross River gorillas and other species are hunting, logging, agriculture, fuel wood harvesting, clearance of land for plantations and exploitation of natural resources. Gorillas and other primates are only a small part of the larger ecosystem and thus rely on many aspects of their habitat for survival. Furthermore, because of their body size, they have little ability to adapt to new environments, and they have a rather slow reproductive rate. Even though research on Cross River gorillas is somewhat limited, there is enough to conclude that these animals are currently able to sustain survival. What is still under debate is the total number of Cross River gorillas that exist.
The Cross River gorilla is not only a critically endangered subspecies, as labeled by the IUCN (International Union for Conservation of Nature), but also understudied. The limited extent of their natural range has left Cross River gorillas approximately away from other gorilla populations. This region is around the Nigeria-Cameroon border, where highland areas create geographic restrictions for these gorillas. During the 20th century, Cross River gorillas were known to roam lowland localities; however, due to habitat loss and other human-made factors such as resource exploitation, they were driven to inhabit only hill areas. This led to a decrease in resource availability as well as land availability.
Most of the habitat regions of Cross River gorillas are legally protected due to the subspecies' critically endangered status. However, some areas are not, such as the area between Kagwene Mountain and Upper Mbulu and the area around Mone North.
Behavior
A study published in 2007 in the American Journal of Primatology announced the discovery of the subspecies fighting back against possible threats from humans. They "found several instances of gorillas throwing sticks and clumps of grass". This is unusual. When encountered by humans, gorillas usually flee and rarely charge.
Cross River gorillas have certain nesting behaviors (i.e. mean nest group size, style of the nest, location of the nest, and nest reuse patterns) that depend on factors such as their current habitat, climate, food source availability and risk of attack or vulnerability. According to research done on the Cross River gorillas living in the Kagwene Gorilla Sanctuary, there is a high correlation between whether a nest is constructed on the ground or in a tree and the season. From April up until November, Cross River gorillas are more likely to build their nests within a tree, and from November on they are more likely to build them on the ground. Overall, it was found that more nests built at night were built on the ground as opposed to in trees. This subspecies is also more likely to construct nests during the wet season than the dry season, as well as construct more arboreal nests in the wet season. It was found that day nest construction was more common, especially in the wet season. Reuse of nesting sites was also found to be common, although it did not have any relation to the season. Their mean nest group size is four to seven individuals, although nest group size varies depending on location.
The groups of Cross River gorillas consist mainly of one male and six to seven females plus their offspring. Gorillas in lowlands are seen to have fewer offspring than those in the highlands. This is thought to be because of the hunting rate in the lowlands and infant mortality rate. The groups in the highlands are densely populated compared to those in the lowlands.
The Cross River gorilla's diet consists largely of fruit, herbaceous vegetation, liana, and tree bark. Much like their nesting habits, what they eat is contingent on the season. Observations of the gorilla indicate that it seems to prefer fruit, but will settle for other sources of nutrition during the dry season of about 4–5 months in northern regions. Cross River gorillas eat more liana and tree bark throughout the year, and less fruit during dry periods of scarcity.
Diet
The Cross River gorilla usually lives in small groups of 4–7 individuals, with a few male and a few female members. Their diet usually consists of fruit, but in fruit-scarce months (August–September, November–January) their diet is primarily made up of terrestrial herbs and the bark and leaves of climbers and trees. Many of the Cross River gorilla's food sources are very seasonal, and thus their diets are filled with very dense, nutritious vegetation that is usually found near their nesting sites. It was found that the Afi Mountain group's diet mostly consisted of Aframomum spp. (Zingiberaceae) herbs, but when it was available in the wet season, they preferred to eat Amorphophallus difformis (Araceae) over the Aframomum, showing a preference for certain seasonal foods and an affinity for vegetation found only in their habitat.
Nesting
The nesting behavior of the Cross River gorilla is influenced by environmental conditions, such as climate, predation, herbaceous vegetation, the absence of suitable nest-building materials and the seasonal fruits nearby. The gorillas did display certain nesting habits, such as mean nest group sizes, the size and type of nest created, and the reuse of certain nesting locations near seasonal food sources. In Sunderland-Groves's research on the nesting behavior of G. g. diehli at Kagwene Mountain, they discovered that nesting locations, whether on the ground or arboreal, were greatly influenced by the current season. During the dry season most of the nests were made on the ground, yet during the wet season the majority of the nests were made high in the trees, to provide protection from the rain. It was also found that the gorillas created more day nests during the wet season and reused nesting sites about 35% of the time. The mean group size was 4–7 individuals, yet the mean nest count at the sites was 12.4 nests and the most frequent number of nests was 13, showing that some gorillas may have made multiple nests. The researchers also found nest sites with up to 26 nests, showing that sometimes multiple groups would nest together.
Aggression
The Cross River gorilla at Kagwene Mountain in Cameroon has been observed using tools, a behavior that seems to be unique to the population in this region. Tool use has been observed in three separate cases: gorillas threw grass at researchers, threw a detached branch, and, in a third case, responded to a man who threw rocks at them by throwing back fistfuls of grass. In each encounter, the gorillas in the group observed the researchers and reacted to their presence with vocalizations, then calmed, and finally the male gorillas approached and threw grass at the researchers. The researchers have stated that this throwing behavior might have arisen due to human contact in the fields and farms surrounding the mountain, and that the ambivalent nature of the gorillas is due to the surrounding people not hunting them because of the local folklore about the gorillas.
Geographical distribution
This subspecies is populated at the border between Nigeria and Cameroon, in both tropical and subtropical moist broadleaf forests which are also home to the Nigeria-Cameroon chimpanzee, another subspecies of great ape. The Cross River gorilla is the most western and northern form of gorilla, and is restricted to the forested hills and mountains of the Cameroon-Nigeria border region at the headwaters of the Cross River. It is separated by about from the nearest population of western lowland gorilla (Gorilla gorilla gorilla), and by around from the gorilla population in the Ebo Forest of Cameroon. Groups of these gorillas concentrate their activities in 11 localities across a range, though recent field surveys confirmed the presence of gorillas outside of their known localities suggesting a wider distribution within this range. This distribution is corroborated by genetic research, which has found evidence that many Cross River gorilla localities continue to maintain contact through the occasional dispersal of individuals.
The occurrence of Cross River gorillas has been confirmed in the Mbe Mountains and the Forest Reserves of Afi River, Boshi Extension, and Okwanggo of Nigeria’s Cross River State, and in the Takamanda and Mone River Forest Reserves, and the Mbulu Forest, of the Cameroon’s South West Province. These locations cover a mostly continuous forest area of about from Afi Mountain to Kagwene Mountain according to the 2007 regional action plan for Cross River gorilla conservation. Researchers and conservationists also postulate that there is a possible outlying locality in the forests near Bechati in the southeast. Today it’s estimated that their total population area covers about . Cross River gorillas have been known to cling to the Afi-to-Kagwene landscape because of its rugged terrain and high altitude which keeps it secluded from human interference.
However, a study conducted in 2013 found that Cross River gorillas also inhabit areas lower in altitude such as the Mawambi Hills. This site is about above sea level, which is much lower than their average niche at about above sea level.
Habitat loss
Cross River gorillas reside in small populations split from other subpopulations of the species. They occupy roughly 14 apparently geographically separated areas in a landscape of approximately of rugged terrain spanning the Nigeria–Cameroon border region, with population sizes estimated at 75–110 in Nigeria and 125–185 in Cameroon. Hunting once posed a much higher threat, but habitat loss now poses the bigger threat to the species and its survival. Populations reside in areas of undisturbed dense forest, which is scarce due to human occupation or use for natural resources. The Takamanda National Park and the Kagwene Gorilla Sanctuary are where most of the surviving members reside. Nest distribution was clearly influenced by anthropogenic factors within the sanctuary, with the disturbed southern section of the park avoided. Even though wildlife laws covering these areas are in place, Cross River gorillas will not nest in areas near humans. Conservation and eco-guards are empowered by the government to enforce wildlife laws within the sanctuary. A planned superhighway to the west of Ekuri community forest was rerouted in 2017, as the highway and its buffer zone would have had a significant impact on the remaining habitat.
Fragmented population
The increased population of human inhabitants and the expansion of grasslands (due to human activity) have caused a fragmentation of the species into many subpopulations. Many factors, mostly related to human activity, contributed to the fragmentation of the population, including the expansion of farmland, human occupation, lack of accessible habitat and the sparsity of suitable or favorable habitat. Due to this isolation, gene flow has begun to slow, and subpopulations are suffering from a lack of genetic diversity, which could become a long-term issue. A study found that gene flow accompanied the divergence of western lowland and Cross River gorillas until just 400 or so years ago, which rather supports a scenario in which intensifying human activities may have increased the isolation of these ape populations. The recent decrease in the Cross River population is accordingly most likely attributable to increasing anthropogenic pressure over the last several hundred years.
Hunting
The more recent commercialization of bushmeat hunting has had a large impact on the population. Hunting appears to be more intense in the lowlands and may have contributed to the concentration of gorillas in the highlands and to their small population sizes. Despite laws against it, hunting persists to supply local consumption and trade to other countries. The laws are rarely enforced effectively, and given the precarious state of the Cross River gorilla, any hunting has a large impact on the population's survival. All hunting of the population is unsustainable.
Decline
The population of Cross River gorillas declined by 59% between 1995 and 2010, a greater decline over that period than for any other subspecies of great ape. Apes such as the Cross River gorilla serve as indicators of problems in their environment and also help other species survive. The decline began some thirty years ago and has continued at an alarming rate. The danger posed by hunters has led these animals to fear humans and human contact, so sightings of the Cross River gorilla are rare.
Cross River gorillas avoid nesting in grasslands and on farms, so as these expand, the remaining forest habitat becomes degraded and fragmented. Coarse-scale spatial models fail to explain why the gorillas display a highly fragmented distribution within what appears to be a large, continuous area of suitable habitat. Fragmentation reduces or even eliminates migration between subpopulations and therefore causes more inbreeding within a single population, leading to a loss of genetic diversity. This has negative effects on the long-term viability of population fragments, and by extension, the population as a whole. Researchers use genetic methods to better understand the Cross River gorilla population; certain loci within the genome have given the best insight into the subdivisions and the dispersal of genetic variation across populations. Surveys suggest that the total population is about 300 individuals, fragmented across about ten localities with limited reproductive contact. On top of this fragmentation, the Cross River gorilla is also threatened by hunting for bushmeat and for the use of its bones in pseudoscientific medicine. For example, the exploitation of some primate species in Africa is prohibited because certain local communities endow them with ritual meaning, sometimes regarding them as totems, while the same animals are also used in traditional medicine.
Another threat to the Cross River gorilla is the harmful gorilla pet trade. To date, there is only one recorded Cross River gorilla in captivity, held at the . Although this is a small number, the pet trade has posed a large threat to other gorilla species in the past and is likely to endanger the Cross River gorilla as well. Because infant gorillas are preferred as pets, hunters often kill the adults protecting them.
The Cross River gorilla is critically endangered due to the combined threats of hunting and Ebola infection. Even if Ebola mortality and hunting were to abate, a fast recovery is unlikely: the reproduction rate of the Cross River gorilla is low, and it is estimated that the population would take 75 years to fully recover. The gorillas are also threatened by loss of habitat to mining, agriculture, and timber extraction.
Despite this, conservationists are optimistic about the gorilla's chances of survival after several adults and infants were captured on film in spring 2020.
Conservation status
While all western gorillas are Critically Endangered (in the case of the western lowland gorilla due in part to the Ebola virus), the Cross River gorilla is the most endangered of the African apes. A 2014 survey estimated that fewer than 250 mature individuals were left in the wild. However, according to a 2012 survey conducted by Conservation International, the Cross River gorilla did not make "The World's 25 Most Endangered Primates" list. In efforts to conserve other species, it has already been determined that scattered populations should be brought together to prevent inbreeding. One problem with the scattered populations of Cross River gorillas is that they are surrounded by human populations that bring threats such as bushmeat hunting and habitat loss. Also, the protected habitats of Cross River gorillas along the Nigeria-Cameroon border lie close to hunting regions, which increases the threat of extinction. The Cross River gorilla is especially significant to the ecosystem because it is an excellent seed disperser for certain tropical plant species that would otherwise face extinction.
In 2007, a survey was conducted in five villages with the aim of assessing taboos against hunting and eating these endangered species. In the Lebialem division of Cameroon, 86% of the population were in favour of the conservation of these species, seeing them as important morphological counterparts to humans whose dying out would cause the demise of their human totemic counterparts. One suspected reason for the decline of Cross River gorillas was waning adherence to these totemic practices among younger people in the 18 to 25 year age range. Regardless, the taboo remains in place and still strongly discourages hunting of these endangered species. These totemic traditions are believed to be critical to the subspecies' continued survival and well-being. The recurrent revival of these beliefs and practices is seen as a way to reinforce the conservation of these species, especially in the absence of real law enforcement due to a lack of governance. While this could also foster support from different villages and communities, and preserve their culture, care must be taken when selecting these practices, as some could encourage killing. Largely because of such taboos, there have been no Cross River gorilla hunting incidents in the past 15 years. The presence of a taboo prohibiting their persecution has been considered an extremely successful local conservation strategy.
A workshop for the conservation of the Cross River gorilla, organized by the Wildlife Conservation Society and the Nigerian Conservation Foundation, was held in Nigeria in April 2001. Its overall goal was to enhance the subspecies' chance of survival, given its rarity and its distinctness from other western gorillas. The most important outcomes were a list of recommended actions to save the subspecies and the establishment of regular meetings between the governments and conservation groups of Cameroon and Nigeria to achieve maximum efficiency in their conservation efforts.
In 2008, the government of Cameroon created the Takamanda National Park on the border of Nigeria and Cameroon in an attempt to protect these gorillas. The park now forms part of an important trans-boundary protected area with Nigeria's Cross River National Park, safeguarding an estimated 115 gorillas — a third of the Cross River gorilla population — along with other rare species. The hope is that the gorillas will be able to move between the Takamanda reserve in Cameroon and Nigeria's Cross River National Park across the border.
The Kagwene Gorilla Sanctuary was created by the Cameroonian government on April 3, 2008, as part of the IUCN's Cross River gorilla action plan. It protects 19.44 km² of land between the forests of Mbulu and Nijikwa in western Cameroon. It consists of rugged, mountainous terrain and represents the highest altitudinal extent of the Cross River gorilla's distribution, with the highest point at above sea level. Only about half of its land is prime gorilla habitat; the rest is grassland or cultivated land not suitable for the subspecies. Owing to its sanctuary status, it was expected to be provided with a conservator and eco-guards to enforce wildlife laws within its perimeter.
| Biology and health sciences | Apes | Animals |
1719876 | https://en.wikipedia.org/wiki/Yellowstone%20hotspot | Yellowstone hotspot | The Yellowstone hotspot is a volcanic hotspot in the United States responsible for large-scale volcanism in Idaho, Montana, Nevada, Oregon, and Wyoming, formed as the North American tectonic plate moved over it. It formed the eastern Snake River Plain through a succession of caldera-forming eruptions. The resulting calderas include the Island Park Caldera, the Henry's Fork Caldera, and the Bruneau-Jarbidge caldera. The hotspot currently lies under the Yellowstone Caldera. Its most recent caldera-forming supereruption, known as the Lava Creek Eruption, took place 640,000 years ago and created the Lava Creek Tuff and the most recent Yellowstone Caldera. The Yellowstone hotspot is one of a few volcanic hotspots underlying the North American tectonic plate; another example is the Anahim hotspot.
Snake River Plain
The eastern Snake River Plain is a topographic depression that cuts across Basin and Range mountain structures, more or less parallel to North American Plate motion. Beneath more recent basalts are rhyolite lavas and ignimbrites that erupted as the lithosphere passed over the hotspot. Volcanoes that erupted after the plain had passed over the hotspot later covered it with young basalt lava flows in places, including Craters of the Moon National Monument and Preserve.
The central Snake River Plain is similar to the eastern plain, but differs in having thick sections of interbedded lacustrine (lake) and fluvial (stream) sediments, including the Hagerman Fossil Beds.
Nevada–Oregon calderas
Although the McDermitt volcanic field on the Nevada–Oregon border is frequently shown as the site of the initial impingement of the Yellowstone hotspot, new geochronology and mapping demonstrate that the area affected by this mid-Miocene volcanism is significantly larger than previously appreciated. Three silicic calderas have been newly identified in northwest Nevada, west of the McDermitt volcanic field. These calderas, along with the Virgin Valley and McDermitt calderas, are interpreted to have formed during a short interval 16.5–15.5 million years ago, in the waning stage of Steens flood basalt volcanism. The northwest Nevada calderas have diameters ranging from 15 to 26 km and deposited high-temperature rhyolite ignimbrites over approximately 5000 km².
As the hotspot drifted beneath what is now Nevada and Oregon, it increased ecological beta diversity locally by fragmenting previously connected habitats and increasing topographic diversity in western North America.
The Bruneau-Jarbidge volcanic field erupted between ten and twelve million years ago, spreading a thick blanket of ash in the Bruneau-Jarbidge event and forming a wide caldera. Animals were suffocated and burned in pyroclastic flows within a hundred miles of the event, and died of slow suffocation and starvation much farther away, notably at Ashfall Fossil Beds, located 1000 miles downwind in northeastern Nebraska, where a foot of ash was deposited. There, two hundred fossilized rhinoceros and many other animals were preserved in two meters of volcanic ash. By its characteristic chemical fingerprint and the distinctive size and shape of its crystals and glass shards, this ash stands out among dozens of prominent ashfall horizons laid down in the Cretaceous, Paleogene, and Neogene periods of central North America, and the event responsible for it was identified as Bruneau-Jarbidge. Prevailing westerlies deposited the distal ashfall over a vast area of the Great Plains.
Volcanic fields
Twin Falls and Picabo volcanic fields
The Twin Falls and Picabo volcanic fields were active about 10 million years ago. The Picabo Caldera was notable for producing the Arbon Valley Tuff 10.2 million years ago.
Heise volcanic field
The Heise volcanic field of eastern Idaho produced explosive caldera-forming eruptions which began 6.6 million years ago and lasted for more than 2 million years, sequentially producing four large-volume rhyolitic eruptions. The first three caldera-forming rhyolites – the Blacktail Tuff, Walcott Tuff, and Conant Creek Tuff – totaled at least 2250 km³ of erupted magma. The final, extremely voluminous, caldera-forming eruption – the Kilgore Tuff, which erupted 1800 km³ of ash – occurred 4.5 million years ago.
Yellowstone Plateau
The Yellowstone Plateau volcanic field is composed of four adjacent calderas. West Thumb Lake is itself formed by a smaller caldera which erupted 174,000 years ago. (See Yellowstone Caldera map.) The Henry's Fork Caldera in Idaho was formed by an eruption more than 1.3 million years ago and is the source of the Mesa Falls Tuff. It is nested inside the Island Park Caldera, and the two calderas share a rim on the western side. The earlier Island Park Caldera is much larger and more oval, and extends well into Yellowstone Park. Although much smaller, the Henry's Fork Caldera is still sizeable at long and wide, and its curved rim is plainly visible from many locations in the Island Park area.
Of the many calderas formed by the Yellowstone hotspot, including the later Yellowstone Caldera, the Henry's Fork Caldera is the only one that is currently clearly visible. The Henry's Fork of the Snake River flows through the caldera and drops out of it at Upper and Lower Mesa Falls. The caldera is bounded by Ashton Hill on the south, Big Bend Ridge and Bishop Mountain on the west, Thurburn Ridge on the north, and Black Mountain and the Madison Plateau on the east. The Henry's Fork Caldera lies in an area called Island Park, and Harriman State Park is situated within it.
The Island Park Caldera is older and much larger than the Henry's Fork Caldera, with approximate dimensions of by . It is the source of the Huckleberry Ridge Tuff, which is found from southern California to the Mississippi River near St. Louis. This supereruption occurred 2.1 million years BP and produced 2500 km³ (700 mi³) of ash. The Island Park Caldera is sometimes referred to as the First Phase Yellowstone Caldera or the Huckleberry Ridge Caldera. The youngest of the hotspot calderas, the Yellowstone Caldera, formed 640,000 years ago and is about by wide. Non-explosive eruptions of lava and less-violent explosive eruptions have occurred in and near the Yellowstone Caldera since the last supereruption. The most recent lava flow occurred about 70,000 years ago, while the largest violent eruption excavated the West Thumb of Lake Yellowstone around 150,000 years ago. Smaller steam explosions occur as well: an explosion 13,800 years ago left a 5-kilometer-diameter crater at Mary Bay on the edge of Yellowstone Lake.
Both the Heise and Yellowstone volcanic fields produced a series of caldera-forming eruptions characterised by magmas with so-called "normal" oxygen isotope signatures (rich in the heavy oxygen-18 isotope), followed by a series of predominantly post-caldera magmas with so-called "light" oxygen isotope signatures (low in the heavy oxygen-18 isotope). The final stage of volcanism at Heise was marked by "light" magma eruptions. If Heise is any indication, this could mean that the Yellowstone Caldera has entered its final stage, but the volcano might still exit with a climactic fourth caldera event analogous to the fourth and final caldera-forming eruption of Heise (the Kilgore Tuff), which was also made up of so-called "light" magmas. The appearance of "light" magmas would seem to indicate that the uppermost portion of the continental crust has largely been consumed by the earlier caldera-forming events, exhausting the melting potential of the crust above the mantle plume. In this case Yellowstone could be expiring. It could be another 1–2 million years (as the North American Plate moves across the Yellowstone hotspot) before a new supervolcano is born to the northeast, and the Yellowstone Plateau volcanic field joins the ranks of its deceased ancestors in the Snake River Plain.
A 2020 study suggests that the hotspot may be waning.
Eruptive history
Wapi Lava field and King's Bowl blowout, northeast of Rupert, Idaho; 2.270 ka ±0.15. (2,270 years ago)
Hell's Half Acre lava field, west to southwest of Idaho Falls; 3.250 ka ±0.15. (3,250 years ago)
Shoshone lava field, North of Twin Falls, Idaho; 8.400 ka ±0.3.
Craters of the Moon National Monument and Preserve; Great Rift of Idaho; the lava field was formed during eight eruptive episodes between about 15 and 2 ka.
Kings Bowl and Wapi lava fields formed about 2.250 ka.
Yellowstone Caldera; between 70 and 150 ka; intracaldera rhyolitic lava flows.
Yellowstone Park
Yellowstone Caldera (size: 45 x 85 km); 640 ka; VEI 8; more than of Lava Creek Tuff.
Henry's Fork Caldera (size: 16 km wide); 1.3 Ma; VEI 7; of Mesa Falls Tuff.
Island Park Caldera
Harriman State Park
Island Park Caldera (size: 100 x 50 km); 2.1 Ma; VEI 8; of Huckleberry Ridge Tuff.
Heise volcanic field, Idaho:
Kilgore Caldera (size: 80 x 60 km); VEI 8; of Kilgore Tuff; 4.45 Ma ±0.05.
4.49 Ma tuff of Heise
5.37 Ma tuff of Elkhorn Springs
5.51 Ma ±0.13 (Conant Creek Tuff) (but Anders (2009): 5.94 Ma)
5.6 Ma; of Blue Creek Tuff.
5.81 Ma tuff of Wolverine Creek
6.27 Ma ±0.04 (Walcott Tuff).
6.57 Ma tuff of Edie School
Blacktail Caldera (size: 100 x 60 km); 6.62 Ma ±0.03; of Blacktail Tuff.
7.48 Ma tuff of American Falls
8.72 Ma Grey's Landing Ignimbrite; VEI 8. At least of volcanic material.
8.75 Ma tuff of Lost River Sinks
8.99 Ma, McMullen Supereruption; VEI 8. At least of volcanic material.
9.17 Ma tuff of Kyle Canyon
9.34 Ma tuff of Little Chokecherry Canyon
Twin Falls volcanic field, Twin Falls County, Idaho; 8.6 to 10 Ma.
Picabo volcanic field, Picabo, Idaho; 10.09 Ma (Arbon Valley Tuff A) and 10.21 Ma ±0.03 (Arbon Valley Tuff B).
Bruneau-Jarbidge volcanic field, Bruneau River/ Jarbidge River, Idaho; 10.0 to 12.5 Ma; Ashfall Fossil Beds eruption.
Owyhee-Humboldt volcanic field, Owyhee County, Idaho, Nevada, and Oregon; around 12.8 to 13.9 Ma.
McDermitt volcanic field, Orevada rift, McDermitt, Nevada/Oregon (five overlapping and nested calderas; satellitic to these are two additional calderas):
Trout Creek Mountains, East of the Pueblo Mountains, Whitehorse Caldera (size: 15 km wide), Oregon; 15 Ma; of Whitehorse Creek Tuff.
Jordan Meadow Caldera, (size: 10–15 km wide); 15.6 Ma; Longridge Tuff member 2–3.
Longridge Caldera, (size: 33 km wide); 15.6 Ma; Longridge Tuff member 5.
Calavera Caldera, (size: 17 km wide); 15.7 Ma; of Double H Tuff.
Trout Creek Mountains, Pueblo Caldera (size: 20 x 10 km), Oregon; 15.8 Ma; of Trout Creek Mountains Tuff.
Hoppin Peaks Caldera, 16 Ma; Hoppin Peaks Tuff.
Washburn Caldera, (size: 30 x 25 km wide), Oregon; 16.548 Ma; of Oregon Canyon Tuff.
Yellowstone hotspot (?), Lake Owyhee volcanic field; 15.0 to 15.5 Ma.
Yellowstone hotspot (?), Northwest Nevada volcanic field, Virgin Valley, High Rock, Hog Ranch, and unnamed calderas; West of the Pine Forest Range, Nevada; 15.5 to 16.5 Ma; Tuffs: Idaho Canyon, Ashdown, Summit Lake, and Soldier Meadow.
Columbia River Basalt Province: the Yellowstone hotspot set off a huge pulse of volcanic activity; the first eruptions were near the Oregon-Idaho-Washington border. Columbia River and Steens flood basalts, Pueblo and Malheur Gorge regions, Pueblo Mountains, Steens Mountain, Washington, Oregon, and Idaho; the most vigorous eruptions were from 14 to 17 Ma; of lava.
Columbia River flood basalts,
Steens flood basalts,
Crescent volcanics, Olympic Peninsula/ southern Vancouver Island, 50–60 Ma.
Siletz River Volcanics, Oregon Coast Range, a sequence of basaltic pillow lavas.
Carmacks Group, Yukon, 70 Ma.
| Physical sciences | Geologic features | Earth science |
1720511 | https://en.wikipedia.org/wiki/Hardiness%20zone | Hardiness zone | A hardiness zone is a geographic area defined as having a certain average annual minimum temperature, a factor relevant to the survival of many plants. In some systems other statistics are included in the calculations. The original and most widely used system, developed by the United States Department of Agriculture (USDA) as a rough guide for landscaping and gardening, defines 13 zones by long-term average annual extreme minimum temperatures. It has been adapted by and to other countries (such as Canada) in various forms. A plant may be described as "hardy to zone 10": this means that the plant can withstand a minimum temperature of .
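To make the banding concrete, the sketch below maps a long-term average annual extreme minimum temperature (in °F) to a USDA-style zone label. It is a minimal illustration, not an official USDA tool, and it assumes the commonly published convention of 10 °F full-zone bands starting at −60 °F for zone 1, each split into 5 °F "a"/"b" halves.

```python
# Minimal sketch of the USDA-style lookup. Assumed convention: zone 1 spans
# -60 to -50 degF, each full zone is a 10 degF band, and each band splits
# into 5 degF "a" (colder) and "b" (warmer) halves, up to zone 13.
def usda_zone(avg_annual_extreme_min_f: float) -> str:
    t = min(max(avg_annual_extreme_min_f, -60.0), 69.9)  # clamp to zones 1-13
    offset = t + 60.0                                    # zone 1 starts at -60 degF
    index = int(offset // 10) + 1                        # 10 degF per full zone
    half = "a" if (offset % 10) < 5 else "b"             # 5 degF half-zones
    return f"{index}{half}"

print(usda_zone(-12.5))  # -> "5b" (the -15 to -10 degF half-band)
print(usda_zone(67.0))   # -> "13b", the warmest half-zone
```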
Unless otherwise specified, in American contexts "hardiness zone" or simply "zone" usually refers to the USDA scale. However, some confusion can exist in discussing buildings and HVAC, where "climate zone" can refer to the International Energy Conservation Code zones, where Zone 1 is warm and Zone 8 is cold.
Other hardiness rating schemes have been developed as well, such as the UK Royal Horticultural Society and US Sunset Western Garden Book systems. A heat zone (see below) is instead defined by annual high temperatures; the American Horticultural Society (AHS) heat zones use the average number of days per year when the temperature exceeds .
United States hardiness zones (USDA scale)
The USDA system was originally developed to aid gardeners and landscapers in the United States.
In the United States, most of the warmer zones (zones 9, 10, and 11) are located in the deep southern half of the country and on the southern coastal margins. Higher zones can be found in Hawaii (up to 12) and Puerto Rico (up to 13). The southern middle portion of the mainland and central coastal areas are in the middle zones (zones 8, 7, and 6). The far northern portion of the central interior of the mainland has some of the coldest zones (zones 5, 4, and a small area of zone 3) and often has a much less consistent range of winter temperatures due to its more continental climate, especially further west where diurnal temperature variations are higher, so the zone map has its limitations in these areas. Lower zones can be found in Alaska (down to 1). The low latitude and often stable weather in Florida, the Gulf Coast, and southern Arizona and California are responsible for the rarity of episodes of severe cold relative to normal in those areas. The warmest zone in the 48 contiguous states is the Florida Keys (11b) and the coldest is in north-central Minnesota (2b). A couple of locations on the northern coast of Puerto Rico have the warmest hardiness zone in the United States at 13b. Conversely, isolated inland areas of Alaska have the coldest hardiness zone in the United States at 1a.
Definitions
History
The first attempts to create a geographical hardiness zone system were undertaken by two researchers at the Arnold Arboretum in Boston; the first was published in 1927 by Alfred Rehder, and the second by Donald Wyman in 1938. The Arnold map was subsequently updated in 1951, 1967, and finally 1971, but eventually fell out of use completely.
The modern USDA system began at the US National Arboretum in Washington. The first map was issued in 1960, and revised in 1965. It used uniform ranges, and gradually became widespread among American gardeners.
The USDA map was revised and reissued in 1990 with freshly available climate data, this time with five-degree distinctions dividing each zone into new "a" and "b" subdivisions.
In 2003, the American Horticultural Society (AHS) produced a draft revised map, using temperature data collected from July 1986 to March 2002. The 2003 map placed many areas approximately a half-zone higher (warmer) than the USDA's 1990 map. Reviewers noted the map zones appeared to be closer to the original USDA 1960 map in its overall zone delineations. Their map purported to show finer detail, for example, reflecting urban heat islands by showing the downtown areas of several cities (e.g., Baltimore, Maryland; Washington, D.C., and Atlantic City, New Jersey) as a full zone warmer than outlying areas. The map excluded the detailed a/b half-zones introduced in the USDA's 1990 map, an omission widely criticized by horticulturists and gardeners due to the coarseness of the resulting map. The USDA rejected the AHS 2003 draft map and created its own map in an interactive computer format, which the American Horticultural Society now uses.
In 2006, the Arbor Day Foundation released an update of U.S. hardiness zones, using mostly the same data as the AHS. It revised hardiness zones, reflecting generally warmer recent temperatures in many parts of the country, and appeared similar to the AHS 2003 draft. The Foundation also did away with the more detailed a/b half-zone delineations.
In 2012 the USDA updated its plant hardiness map based on 1976–2005 weather data, using a longer period of data to smooth out year-to-year weather fluctuations. Two new zones (12 and 13) were added to better define and improve information sharing on tropical and semitropical plants; they also appear on the maps of Hawaii and Puerto Rico. There is a very small area east of San Juan, Puerto Rico, that includes the airport in the coastal municipality of Carolina, where the mean minimum is 67 °F (19 °C); it is classified as hardiness zone 13b, the highest category, with temperatures rarely below . The map has a higher resolution than previous editions and is able to show local variations due to factors such as elevation or large bodies of water. Many zone boundaries were changed as a result of the more recent data, as well as new mapping methods and additional information gathered. Many areas were a half-zone warmer than on the previous 1990 map. The 2012 map was created digitally for the internet and includes a ZIP code zone finder and an interactive map.
In 2015, the Arbor Day Foundation released another revised map, also without a and b subdivisions, showing many areas with even warmer zones; the most notable changes were in the Mid-Atlantic and Northeast, with cities such as Philadelphia, New York City, and Washington, D.C. in zone 8 due to their urban heat islands.
In November 2023, the USDA released another updated version of their plant hardiness map, based on 1991–2020 weather data across the United States. The updated map shows continued northward movement of hardiness zones, reflecting a continued warming trend in the United States' climate.
Selected U.S. cities
The USDA plant hardiness zones for selected U.S. cities, based on the 2023 map, are the following:
Limitations
As the USDA system is based entirely on average annual extreme minimum temperature in an area, it is limited in its ability to describe the climatic conditions a gardener may have to account for in a particular area: there are many other factors that determine whether or not a given plant can survive in a given zone.
Zone information alone is often not adequate for predicting winter survival, since factors such as frost dates and frequency of snow cover can vary widely between regions. Even the extreme minimum itself may not be useful when comparing regions in widely different climate zones. As an extreme example, due to the Gulf Stream most of the United Kingdom is in zones 8–9, while in the US, zones 8–9 include regions such as the subtropical coastal areas of the southeastern US and Mojave and Chihuahuan inland deserts, thus an American gardener in such an area may only have to plan for several nights of cold temperatures per year, while their British counterpart may have to plan for several months.
In addition, the zones do not incorporate any information about the duration of cold temperatures, summer temperatures, or sun intensity (insolation); thus sites may have the same mean winter minima on the few coldest nights and be in the same garden zone, yet have markedly different climates. For example, zone 8 covers coastal, high-latitude, cool-summer locations like Seattle and London, as well as lower-latitude, hot-summer climates like Charleston and Madrid. Farmers, gardeners, and landscapers in the former two must plan for entirely different growing conditions from those in the latter two, in terms of the length of hot weather and sun intensity. Coastal Ireland and central Florida are both zone 10, but have radically different climates.
The hardiness scales do not take into account the reliability of snow cover in the colder zones. Snow acts as an insulator against extreme cold, protecting the root system of hibernating plants. If the snow cover is reliable, the actual temperature to which the roots are exposed will not be as low as the hardiness zone number would indicate. As an example, Quebec City in Canada is located in zone 4, but can rely on significant snow cover every year, making it possible to cultivate plants normally rated for zones 5 or 6. But in Montreal, located to the southwest in zone 5, it is sometimes difficult to cultivate plants adapted to the zone because of the unreliable snow cover.
Many plants may survive in a locality but will not flower if the day length is insufficient or if they require vernalization (a particular duration of low temperature).
There are many other climate parameters that a farmer, gardener, or landscaper may need to take into account as well, such as humidity, precipitation, storms, rainy-dry cycles or monsoons, and site considerations such as soil type, soil drainage and water retention, water table, tilt towards or away from the sun, natural or humanmade protection from excessive sun, snow, frost, and wind, etc. The annual extreme minimum temperature is a useful indicator, but ultimately only one factor among many for plant growth and survival.
Alternatives
An alternative means of describing plant hardiness is to use "indicator plants". In this method, common plants with known limits to their range are used.
Sunset publishes a series that breaks up climate zones more finely than the USDA zones, identifying 45 distinct zones in the US, incorporating ranges of temperatures in all seasons, precipitation, wind patterns, elevation, and length and structure of the growing season.
In addition, the Köppen climate classification system can be used as a more general guide to growing conditions when considering large areas of the Earth's surface or attempting to make comparisons between different continents. The Trewartha climate classification is often a good "real world" concept of climates and their relation to plants and their average growing conditions.
Australian hardiness zones
The Australian National Botanic Gardens have devised another system, in keeping with Australian conditions. The zones are defined by steps of 5 degrees Celsius, from −15 to −10 °C for zone 1 up to 15 to 20 °C for zone 7. They are numerically about 6 lower than the USDA zones: for example, Australian zone 3 is roughly equivalent to USDA zone 9, as the sketch below illustrates. The higher Australian zone numbers had no US equivalents before the 2012 addition of USDA zones 12 and 13.
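A minimal sketch of how these bands line up, assuming only the 5 °C steps and the rough zone offset of 6 described above; the helper functions are hypothetical names introduced here for illustration.

```python
# Hypothetical helpers for the Australian banding described above:
# zone n spans a 5 degC interval starting at -15 degC for zone 1, and the
# rough USDA equivalent is the Australian zone number plus 6.
def australian_zone_band_c(zone: int) -> tuple[float, float]:
    low = -15.0 + 5.0 * (zone - 1)
    return (low, low + 5.0)

def approx_usda_equivalent(australian_zone: int) -> int:
    return australian_zone + 6  # e.g. Australian zone 3 -> about USDA zone 9

print(australian_zone_band_c(3))   # (-5.0, 0.0) degC
print(approx_usda_equivalent(3))   # 9
```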
The spread of weather stations may be insufficient, and too many places with different climates are lumped together. Only 738 Australian stations have records of more than ten years (one station per ), though more populated areas have relatively fewer hectares per station. Mount Isa has three climatic stations with more than a ten-year record: one is in zone 4a, one in zone 4b, and the other is in zone 5a. Sydney residents are split between zones 3a and 4b. Different locations in the same city are suitable for different plants.
Canadian hardiness zones
Climate variables reflecting both the capacity for and the detriments to plant growth are used to develop an index that is mapped to Canada's plant hardiness zones. This index comes from a formula originally developed by Ouellet and Sherk in the mid-1960s.
The formula is a weighted linear combination of seven climate variables (a worked sketch follows the variable definitions below), where:
Y = estimated index of suitability
X1 = monthly mean of the daily minimum temperatures (°C) of the coldest month
X2 = mean frost-free period above 0 °C in days
X3 = amount of rainfall (R) from June to November, inclusive, in terms of R/(R+a) where a=25.4 if R is in millimeters and a=1 if R is in inches
X4 = monthly mean of the daily maximum temperatures (°C) of the warmest month
X5 = winter factor expressed in terms of (0 °C – X1)Rjan where Rjan represents the rainfall in January expressed in mm
X6 = mean maximum snow depth in terms of S/(S+a) where a=25.4 if S is in millimeters and a=1 if S is in inches
X7 = maximum wind gust (km/h) over 30 years.
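As a rough illustration of how such an index is computed, the sketch below evaluates a linear combination of the seven variables. The intercept and weights used as defaults are values commonly reported for the Ouellet–Sherk formula; since the equation itself is not reproduced above, treat them as assumptions to be verified against the original publications.

```python
# Hypothetical sketch of an Ouellet-Sherk-style suitability index: a linear
# combination of the climate variables X1..X7 defined above. The default
# intercept and weights are commonly reported values and are assumptions
# here; verify them against the original 1960s publications before use.
def suitability_index(x, coeffs=(-67.62, 1.734, 0.1868, 69.77,
                                 1.256, 0.006119, 22.37, -0.01832)):
    """x: sequence (X1, ..., X7), already in the transformed units above;
    coeffs: (intercept, w1, ..., w7)."""
    intercept, weights = coeffs[0], coeffs[1:]
    return intercept + sum(w * xi for w, xi in zip(weights, x))

# Illustrative inputs only: X1=-10 degC, 150 frost-free days, R/(R+a)=0.9,
# X4=25 degC, winter factor 500, snow term 0.8, max gust 90 km/h.
print(round(suitability_index((-10, 150, 0.9, 25, 500, 0.8, 90)), 1))
```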
For practical purposes, Canada has adopted the American hardiness zone classification system. The 1990 version of the USDA Plant Hardiness Zone Map included Canada and Mexico, but they were removed with the 2012 update to focus on the United States and Puerto Rico. The Canadian government publishes both Canadian and USDA-style zone maps.
European hardiness zones
Selected European cities
The table below provides USDA hardiness zone data for selected European cities:
Britain and Ireland
USDA zones do not work particularly well in Ireland and Great Britain, as they were designed for continental and subtropical climates. The high latitude, weaker solar intensity, and cooler summers must be considered when comparing with US equivalents. In the shorter and much cooler summers of Ireland and Britain, new growth may be insufficient or may fail to harden off, affecting winter survival.
Owing to the moderating effect of the North Atlantic Current on the Irish and British temperate maritime climate, Britain, and Ireland even more so, have milder winters than their northerly position would otherwise afford. This means that the USDA hardiness zones relevant to Britain and Ireland are quite high, from 7 to 10, as shown below.
In 2012 the United Kingdom's Royal Horticultural Society introduced new hardiness ratings for plants, not places. These run from H7, the hardiest (tolerant of temperatures below ) to H1a (needing temperatures above ). The RHS hardiness ratings are based on absolute minimum winter temperatures (in °C) rather than the long-term average annual extreme minimum temperatures that define USDA zones.
Scandinavia and Baltic Sea Region
Scandinavia lies at the same latitude as Alaska or Greenland, but the effect of the warm North Atlantic Current is even more pronounced here than it is in Britain and Ireland. Save for a very small spot around Karasjok Municipality, Norway, which is in zone 2, nowhere in the Arctic part of Scandinavia gets below zone 3. The Faroe Islands, at 62–63°N, are in zone 8, as are the outer Lofoten Islands at 68°N. Tromsø, a coastal city in Norway at 70°N, is in zone 7, and even Longyearbyen, the northernmost true city in the world at 78°N, is still in zone 4. All these coastal locations have one thing in common, though: cool, damp summers, with temperatures rarely exceeding , or in Longyearbyen. This shows the importance of taking heat zones into account for a better understanding of what may or may not grow.
In Sweden and Finland generally, at sea level to , zone 3 lies north of the Arctic Circle, including towns such as Karesuando and Pajala. Kiruna is the major exception: being located on a hill above frost traps, it is in zone 5. Zone 4 lies between the Arctic Circle and about 64–66°N, with cities such as Oulu, Rovaniemi, and Jokkmokk; zone 5 (south to 61–62°N) contains cities such as Tampere, Umeå, and Östersund. Zone 6 covers the south of mainland Finland, Sweden north of 60°N, and the high plateau of Småland further south. Here one will find cities such as Gävle, Örebro, Sundsvall, and Helsinki. Åland, as well as coastal southern Sweden and the Stockholm area, is in zone 7. The west coast of Sweden (Gothenburg and southwards) enjoys particularly mild winters and also lies in zone 7, making it friendly to some hardy exotic species (found, for example, in the Gothenburg Botanical Garden); the southeast coast of Sweden has colder winters due to the absence of the Gulf Stream's influence.
Central Europe
Central Europe is a good example of a transition from an oceanic to a continental climate, which is reflected in the tendency of the hardiness zones to decrease mainly eastwards rather than northwards. Also, the plateaus and low mountain ranges in this region have a significant impact on how cold it can get during winter. Generally speaking, the hardiness zones are high considering the latitude of the region, although not as high as in Northern Europe's Shetland Islands, where zone 9 extends to over 60°N. In Central Europe, the relevant zones decrease from zone 8 on the Belgian, Dutch, and German North Sea coast, with the exception of some of the Frisian Islands (notably Vlieland and Terschelling), the island of Helgoland, and some of the islands in the Rhine-Scheldt estuary, which are in zone 9, to zone 5 around Suwałki, Podlachia, on the far eastern border between Poland and Lithuania. Some isolated, high-elevation areas of the Alps and Carpathians may even go down to zone 3 or 4. An extreme example of a cold sink is Funtensee, Bavaria, which is at least in zone 3. Another notable example is Waksmund, a small village in the Polish Carpathians, which regularly reaches during winter on calm nights when cold and heavy air masses from the surrounding Gorce and Tatra Mountains descend down the slopes to this low-lying valley, creating extremes which can be up to colder than nearby Nowy Targ or Białka Tatrzańska, both higher up in elevation. Waksmund is in zone 3b, while nearby Kraków, only to the north and lower, is in zone 6a. These examples show that local topography can have a pronounced effect on temperature and thus on what can be grown in a specific region.
Southern Europe
The southern European marker plant for climate, as well as a cultural indicator, is the olive tree, which cannot withstand long periods below freezing, so its cultivation area matches the cool-winter zone. The Mediterranean Sea acts as a temperature regulator, so this area is generally warmer than other parts of the continent; except in mountainous areas where the sea's effect diminishes, it belongs in zones 8–10. The southern Balkans (mountainous western and eastern Serbia, continental Croatia, and Bulgaria), however, are colder in winter and are in zones 6–7. The Croatian (Dalmatian) coast, Albania, and northern Greece are in zones 8–9, as are central-northern Italy (the hills and some spots in the Po Valley are however colder) and southern France; central Iberia is 8–9 (some highland areas are slightly colder). The Spanish and Portuguese Atlantic coast, much of Andalusia and Murcia, coastal and slightly inland southern Valencian Community, part of coastal Catalonia, the Balearic Islands, southwestern Sardinia, most of Sicily, coastal southern Italy, some areas around Albania, coastal Cyprus, and southwestern Greece are in zone 10. In Europe, zone 11a is limited to a few spots. In the Iberian Peninsula, it can be found on the southern coast, in small Spanish areas inside the provinces of Almería, Cádiz, Granada, Málaga, and Murcia. In Portugal, zone 11a can be found in the southwest at a few unpopulated sites around the municipalities of Lagos and Vila do Bispo. In mainland Greece, zone 11a can be found in Monemvasia and also in areas of Crete, the Dodecanese, the Cyclades, and some Argo-Saronic Gulf islands. The Mediterranean islands of Malta, Lampedusa, and Linosa belong to zone 11a, as do a few areas on the southernmost coast of Cyprus. The Balkan area is also more prone to cold snaps and episodes of unseasonable warmth: for instance, despite having daily means and temperature amplitudes similar to Nantucket, Massachusetts, in every month, Sarajevo has recorded below-freezing temperatures in every month of the year.
Macaronesia
Macaronesia consists of four archipelagos: the Azores, the Canary Islands, Cape Verde, and Madeira. At lower altitudes and in coastal areas, the Portuguese Azores and Madeira belong to zones 10b/11b and 11a/11b respectively. Overall the Azores range from 9a to 11b and Madeira from 9b to 12a, with 9a and 9b found inland at the highest altitudes, such as Mount Pico in the Azores or Pico Ruivo in Madeira. The hardiness zones of the Spanish Canary Islands range from 8a to 12b depending on location and altitude. The islands are generally in zones 11b/12a at lower altitudes and in coastal areas, reaching up to 12b on the southernmost coasts or in populated coastal parts such as the city of Las Palmas. The lowest hardiness zones are found in Teide National Park, at 8a/8b owing to its very high altitude. Teide is the highest peak of Macaronesia.
The Cape Verde islands, located much further south inside the tropics, have hardiness zones ranging from 12 to 13 in coastal areas, while the lowest hardiness zone is found on the island of Fogo, at the country's highest peak, Pico do Fogo.
American Horticultural Society heat zones
In addition to the USDA hardiness zones, there are American Horticultural Society (AHS) heat zones.
The criterion is the average number of days per year when the temperature exceeds . The AHS Heat Zone Map for the US is available on the American Horticultural Society website.
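As a sketch of the criterion, the function below counts "heat days" per year and averages them over the record. The 86 °F (30 °C) threshold is the value commonly cited for the AHS map and is an assumption here, since the exact figure is elided above.

```python
# Minimal sketch of the AHS heat-zone criterion: count the days in each year
# whose maximum temperature exceeds a threshold, then average across years.
# The 86 degF (30 degC) threshold is an assumption (commonly cited AHS value).
def mean_heat_days(daily_max_f_by_year, threshold_f=86.0):
    """daily_max_f_by_year: one list of daily max temperatures (degF) per year."""
    yearly = [sum(1 for t in year if t > threshold_f) for year in daily_max_f_by_year]
    return sum(yearly) / len(yearly)

# Two toy years with 45 and 61 heat days -> mean of 53, which the AHS tables
# would then bucket into one of the 12 heat zones.
years = [[90.0] * 45 + [70.0] * 320, [95.0] * 61 + [60.0] * 304]
print(mean_heat_days(years))  # 53.0
```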
South Africa
South Africa has five horticultural or climatic zones. The zones are defined by minimum temperature.
Effects of climate change
The USDA map published in 2012 shows that most of the US had become a half zone warmer in winter compared with the 1990 release. Again, with the publication of the 2023 map, about half of the US shifted a half zone warmer. Research in 2016 suggests that USDA plant hardiness zones will shift even further northward under climate change.
| Technology | Horticulture | null |
682973 | https://en.wikipedia.org/wiki/Tabanidae | Tabanidae | Horse flies and deer flies are true flies in the family Tabanidae in the insect order Diptera. The adults are often large and agile in flight. Only female horseflies bite land vertebrates, including humans, to obtain blood. They prefer to fly in sunlight, avoiding dark and shady areas, and are inactive at night. They are found all over the world except for some islands and the polar regions (Hawaii, Greenland, Iceland). Both horse-flies and botflies (Oestridae) are sometimes referred to as gadflies.
Adult horse-flies feed on nectar and plant exudates; males have weak mouthparts, but females have mouthparts strong enough to bite large animals. This is for the purpose of obtaining enough protein from blood to produce eggs. The mouthparts of females are formed into a stout stabbing organ with two pairs of sharp cutting blades, and a spongelike part used to lap up the blood that flows from the wound. The larvae are predaceous and grow in semiaquatic habitats.
Female horse-flies can transfer blood-borne diseases from one animal to another through their feeding habit. In areas where those diseases occur, they have been known to carry equine infectious anaemia virus, some trypanosomes, the filarial worm Loa loa, anthrax among cattle and sheep, and tularemia. They can reduce growth rates in cattle and lower the milk output of cows if suitable shelters are not provided.
Horse-flies have appeared in literature ever since Aeschylus in Ancient Greece mentioned them driving people to "madness" through their persistent pursuit.
Common names
Tabanidae are known by a large number of common names. The subfamily Chrysopsinae is known as deer flies, perhaps because of their abundance on moorland where deer roam; the names buffalo-fly, moose-fly, and elephant-fly come from other parts of the world where those animals are found. The term "horse-fly" refers primarily to the Tabaninae, which are typically larger and stouter and lack the banded wings deer flies have. Other common names include tabanids, gadflies, green-headed flies, and green flies.
The word "Tabanus" was first recorded by Pliny the Younger and has survived as the generic name. In general, country folk did not distinguish between the various biting insects that irritated their cattle and called them all "gad-flies", from the word "gad" meaning "spike". Other common names include "cleg[g]", "gleg" or "clag", which come from Old Norse and may have originated from the Vikings. Other names such as "stouts" refer to the wide bodies of the insects and "dun-flies" to their sombre colouring. In Australia and the UK they are also known as March flies, a name used in other Anglophonic countries to refer to the non-bloodsucking Bibionidae.
Description
Adult tabanids are large flies with prominent compound eyes, short antennae composed of three segments, and wide bodies. In females, the eyes are widely separated; in males, however, they are almost touching. The eyes are often patterned and brightly coloured in living tabanids but appear dull in preserved specimens. The terminal segment of the antennae is pointed and annulated, appearing to be composed of several tapering rings. There are no hairs or arista stemming from the antennae. Both the head and thorax are clad in short hairs, but no bristles are on the body. The membranous forewings are clear, either shaded uniformly grey or brown, or patterned in some species; they have a basal lobe (or calypter) that covers the modified knob-like hindwings or halteres. The tips of the legs have two lobes on the sides (pulvilli) and a central lobe (empodium) in addition to two claws that enable them to grip to surfaces. Species recognition is based on details of head structures (antennae, frons, and maxillae), wing venation, and body patterning; minute variations of surface structure cause subtle alterations of the overlying hairs, which alters the appearance of the body.
Tabanid species range from medium-sized to very large, robust insects. Most have a body length between , with the largest having a wingspan of . Deer flies in the genus Chrysops are up to long, have yellow to black bodies and striped abdomens, and membranous wings with dark patches. Horse-flies (genus Tabanus) are larger, up to in length and are mostly dark brown or black, with dark eyes, often with a metallic sheen. Yellow flies (genus Diachlorus) are similar in shape to deer flies, but have yellowish bodies and the eyes are purplish-black with a green sheen. Some species in the subfamily Pangoniinae have an exceptionally long proboscis (tubular mouthpart).
The larvae are long and cylindrical or spindle-shaped with small heads and 12 body segments. They have rings of tubercles (warty outgrowths) known as pseudopods around the segments, and also bands of short setae (bristles). The posterior tip of each larva has a breathing siphon and a bulbous area known as Graber's organ. The outlines of the adult insect's head and wings are visible through the pupa, which has seven moveable abdominal segments, all except the front one of which bears a band of setae. The posterior end of the pupa bears a group of spine-like tubercles.
Some species, such as deer flies and the Australian March flies, are known for being extremely noisy during flight, though clegs, for example, fly quietly and bite with little warning. Tabanids are agile fliers; Hybomitra species have been observed to perform aerial manoeuvres similar to those performed by fighter jets, such as the Immelmann turn. Horseflies can lay claim to being the fastest flying insects; the male Hybomitra hinei wrighti has been recorded reaching speeds of up to when pursuing a female.
Distribution and habitat
Tabanids are found worldwide, except for the polar regions, but they are absent from some islands such as Greenland, Iceland, and Hawaii. The genera Tabanus, Chrysops, and Haematopota all occur in temperate, subtropical, and tropical locations, but Haematopota is absent from Australia and South America. They mostly occur in warm areas with suitable moist locations for breeding, but also occupy a wide range of habitats from deserts to alpine meadows. They are found from sea level to at least .
Evolution and taxonomy
The oldest records of the family come from the Early Cretaceous, the oldest being Eotabanoid, known from wings found in the Berriasian (145–140 million years ago) Purbeck Group of England. Although the bloodsucking habit is associated with a long proboscis, a fossil insect that has elongated mouthparts is not necessarily a bloodsucker, as it may instead have fed on nectar. The ancestral tabanids may have co-evolved with the angiosperm plants on which they fed. With a necessity for high-protein food for egg production, the diet of early tabanomorphs was probably predatory, and from this, the bloodsucking habit may have evolved. In the Santana Formation in Brazil, no mammals have been found, so the fossil tabanids found there likely fed on reptiles. Cold bloodsucking probably preceded warm bloodsucking, but some dinosaurs are postulated to have been warm-blooded and may have been early hosts for the horse-flies.
The Tabanidae are true flies and members of the insect order Diptera. With the families Athericidae, Pelecorhynchidae and Oreoleptidae, Tabanidae are classified in the superfamily Tabanoidea. Along with the Rhagionoidea, this superfamily makes up the infraorder Tabanomorpha. Tabanoid families seem to be united by the presence of a venom canal in the mandible of the larvae. Worldwide, about 4,455 species of Tabanidae have been described, over 1,300 of them in the genus Tabanus.
Tabanid identification is based mostly on adult morphological characters of the head, wing venation, and sometimes the last abdominal segment. The genitalia are very simple and do not provide clear species differentiation as in many other insect groups. In the past, most taxonomic treatments considered the family to be composed of three subfamilies: Pangoniinae (tribes Pangoniini, Philolichini, Scionini), Chrysopsinae (tribes Bouvieromyiini, Chrysopsini, Rhinomyzini), and Tabaninae (tribes Diachlorini, Haematopotini, Tabanini). Some treatments increased this to five subfamilies, adding the subfamily Adersiinae, with the single genus Adersia, and the subfamily Scepcidinae, with the two genera Braunsiomyia and Scepsis. A 2015 study by Morita et al., using nucleotide data and aiming to clarify the phylogeny of the Tabanidae, supports three subfamilies. The subfamilies Pangoniinae and Tabaninae were shown to be monophyletic. The tribes Philolichini, Chrysopsini, Rhinomyzini, and Haematopotini were found to be monophyletic, with the Scionini also being monophyletic apart from the difficult-to-place genus Goniops. Adersia was recovered within the Pangoniini, as were the genera previously placed in the Scepcidinae, and Mycteromyia and Goniops were recovered within the Chrysopsini.
The Tabaninae lack ocelli (simple eyes) and have no spurs on the tips of their hind tibiae. In the Pangoniinae, ocelli are present and the antennal flagellum (whip-like structure) usually has eight annuli (or rings). In the Chrysopsinae, the antennal flagellum has a basal plate and the flagellum has four annuli. Females have a shining callus on the frons (front of the head between the eyes). The Adersiinae have a divided tergite on the ninth abdominal segment, and the Scepsidinae have highly reduced mouthparts. Members of the family Pelecorhynchidae were initially included in the Tabanidae and moved into the Rhagionidae before being elevated into a separate family. The infraorder Tabanomorpha shares the blood-feeding habit as a common primitive characteristic, although this is restricted to the female.
Two well-known genera are the common horse-flies, Tabanus, named by Swedish taxonomist Carl Linnaeus in 1758, and the deer flies, Chrysops, named by the German entomologist Johann Wilhelm Meigen in 1802. Meigen did pioneering research on flies and was the author of Die Fliegen (The Flies); he gave the name Haematopota, meaning "blood-drinker", to another common genus of horse-flies.
Biology
Diet and biting behavior
Adult tabanids feed on nectar and plant exudates, and some are important pollinators of certain specialised flowers; several South African and Asian species in the Pangoniinae have spectacularly long probosces adapted for the extraction of nectar from flowers with long, narrow corolla tubes, such as Lapeirousia, and certain Pelargonium.
Both males and females engage in nectar-feeding, but females of most species are anautogenous, meaning they require a blood meal before they are able to reproduce effectively. To obtain the blood, the females, but not the males, bite animals, including humans. The female needs about six days to fully digest her blood meal and after that, she needs to find another host. The flies seem to be attracted to a potential victim by its movement, warmth, and surface texture, and by the carbon dioxide it breathes out. The flies mainly choose large mammals such as cattle, horses, camels, and deer, but few are species-specific. They have also been observed feeding on smaller mammals, birds, lizards, and turtles, and even on animals that have recently died. Unlike many biting insects such as mosquitoes, whose biting mechanism and saliva allow a bite to not be noticed by the host at the time, bites from tabanids are immediately irritating to the victim, so that they are often brushed off, and may have to visit multiple hosts to obtain sufficient blood. This behaviour means that they may carry disease-causing organisms from one host to another. Large animals and livestock are generally powerless to dislodge the fly, so there is no selective advantage for the flies to evolve a less immediately painful bite.
The mouthparts of females are of the usual dipteran form and consist of a bundle of six chitinous stylets that, together with a fold of the fleshy labium, form the proboscis. On either side of these are two maxillary palps. When the insect lands on an animal, it grips the surface with its clawed feet, the labium is retracted, the head is thrust downwards and the stylets slice into the flesh. Some of these have sawing edges and muscles can move them from side-to-side to enlarge the wound. Saliva containing anticoagulant is injected into the wound to prevent clotting. The blood that flows from the wound is lapped up by another mouthpart which functions as a sponge. Bites can be painful for a day or more; fly saliva may provoke allergic reactions such as hives and difficulty with breathing. Tabanid bites can make life outdoors unpleasant for humans, and can reduce milk output in cattle. They are attracted by polarized reflections from water, making them a particular nuisance near swimming pools. Since tabanids prefer to be in sunshine, they normally avoid shaded places such as barns, and are inactive at night.
Attack patterns vary with species; clegs fly silently and prefer to bite humans on the wrist or bare leg; large species of Tabanus buzz loudly, fly low, and bite ankles, legs, or the backs of knees; Chrysops flies somewhat higher, bites the back of the neck, and has a high buzzing note. The striped hides of zebras may have evolved to reduce their attractiveness to horse-flies and tsetse flies. The closer together the stripes, the fewer flies are visually attracted; the zebra's legs have particularly fine striping, and in other, unstriped equids the legs are the shaded part of the body most likely to be bitten. More recent research by the same lead author shows that the stripes were no less attractive to tabanids, but that the flies merely touched the striped surface and could not make a controlled landing to bite. This suggests that one function of the stripes is to interfere with optic flow.
This does not preclude the possible use of stripes for other purposes such as signaling or camouflage. Another disruptive mechanism may also be in play, however: a study comparing horse-fly behaviour when approaching horses wearing either striped or check-patterned rugs, when compared with plain rugs, found that both patterns were equally effective in deterring the insects.
Reproduction
Mating often occurs in swarms, generally at landmarks such as hilltops. The season, time of day, and type of landmark used for mating swarms are specific to particular species.
Eggs are laid on stones or vegetation near water, in clusters of up to 1000, especially on emergent water plants. The eggs are white at first but darken with age. They hatch after about six days, with the emerging larvae using a special hatching spike to open the egg case. The larvae fall into the water or onto the moist ground below. Chrysops species develop in particularly wet locations, while Tabanus species prefer drier places. The larvae are legless grubs, tapering at both ends. They have small heads and 11 or 13 segments and moult six to 13 times over the course of a year or more. In temperate species, the larvae have a quiescent period during winter (diapause), while tropical species breed several times a year. In the majority of species, they are white, but in some, they are greenish or brownish, and they often have dark bands on each segment. A respiratory siphon at the hind end allows the larvae to obtain air when submerged in water. Larvae of nearly all species are carnivorous, often cannibalistic in captivity, and consume worms, insect larvae, and arthropods. The larvae may be parasitized by nematodes, flies of the families Bombyliidae and Tachinidae, and Hymenoptera in the family Pteromalidae. When fully developed, the larvae move into drier soil near the surface of the ground to pupate. In dry places a "remarkable" adaptation was discovered in the 1920s by W.A. Lamborn in Malawi (then Nyasaland): the larvae were found to tunnel in a spiral motion while the mud was still wet and plastic, forming a partitioned cylinder in the center of which the larva settled to pupate after closing the entrance. This adaptation protects the pupae against mud cracks when the mud dries up, as a spreading crack changes direction when it hits the wall of the cylinder.
The pupae are brown and glossy, rounded at the head end, and tapering at the other end. Wing and limb buds can be seen and each abdominal segment is fringed with short spines. After about two weeks, metamorphosis is complete, the pupal case splits along the thorax, and the adult fly emerges. Males usually appear first, but when both sexes have emerged, mating takes place, courtship starting in the air and finishing on the ground. The female needs to feed on blood before depositing her egg mass.
Predators and parasites
Eggs are often attacked by tiny parasitic wasps, and the larvae are consumed by birds, as well as being parasitized by tachinid flies, fungi, and nematodes. Adults are eaten by generalized predators such as birds, and some specialist predators, such as the horse guard wasp (a bembicinid wasp), also preferentially attack horse-flies, catching them to provision their nests.
As disease vectors
Tabanids are known vectors for some blood-borne bacterial, viral, protozoan, and worm diseases of mammals, such as the equine infectious anaemia virus and various species of Trypanosoma which cause diseases in animals and humans. Species of the genus Chrysops transmit the parasitic filarial worm Loa loa between humans, and tabanids are known to transmit anthrax among cattle and sheep, and tularemia between rabbits and humans.
Blood loss is a common problem in some animals when large flies are abundant. Some animals have been known to lose substantial quantities of blood in a single day to tabanid flies, a loss which can weaken or even kill them. Anecdotal reports of bites leading to fatal anaphylaxis in humans have been made, but this is an extremely rare occurrence.
Management
Control of tabanid flies is difficult. Malaise traps are most often used to capture them, and these can be modified with the use of baits and attractants that include carbon dioxide or octenol. A dark shiny ball suspended below them that moves in the breeze can also attract them and forms a key part of a modified "Manitoba trap" that is used most often for trapping and sampling the Tabanidae. Cattle can be treated with pour-on pyrethroids which may repel the flies, and fitting them with insecticide-impregnated eartags or collars has had some success in killing the insects.
Bites
Tabanid bites can be painful to humans. Usually, a weal (raised area of skin) occurs around the site; other symptoms may include urticaria (a rash), dizziness, weakness, wheezing, and angioedema (a temporary itchy, pink or red swelling occurring around the eyes or lips). A few people experience an allergic reaction. The National Health Service of the United Kingdom recommends that the site of the bite should be washed and a cold compress applied. Scratching the wound should be avoided at all times and an antihistamine preparation can be applied. In most cases, the symptoms subside within a few hours, but if the wound becomes infected, medical advice should be sought.
In literature
[Image captions: Left: Johann Wilhelm Meigen's Europäischen Zweiflügeligen 1790, Plate CXCIV. Nos 7, 8 and 9 are Haematopota horse-flies, H. crassicornis, H. grandis, and H. pluvialis, respectively. Right: Thomas Muffett described the horse-fly in his 1634 book Theatre of Insects.]
In Prometheus Bound, which is attributed to the Athenian tragic playwright Aeschylus, a gadfly sent by Zeus's wife Hera pursues and torments Zeus's mistress Io, who has been transformed into a cow and is watched constantly by the hundred eyes of the herdsman Argus: "Io: Ah! Hah! Again the prick, the stab of gadfly-sting! O earth, earth, hide, the hollow shape—Argus—that evil thing—the hundred-eyed." William Shakespeare, inspired by Aeschylus, has Tom o' Bedlam in King Lear, "Whom the foul fiend hath led through fire and through flame, through ford and whirlpool, o'er bog and quagmire", driven mad by the constant pursuit. In Antony and Cleopatra, Shakespeare likens Cleopatra's hasty departure from the Actium battlefield to that of a cow chased by a gadfly: "The breeze [gadfly] upon her, like a cow in June / hoists sail and flies", where "June" may allude not only to the month but also to the goddess Juno, who torments Io, and the cow in turn may allude to Io, who is changed into a cow in Ovid's Metamorphoses.
The physician and naturalist Thomas Muffet wrote that the horse-fly "carries before him a very hard, stiff, and well-compacted sting, with which he strikes through the Oxe his hide; he is in fashion like a great Fly, and forces the beasts for fear of him only to stand up to the belly in water, or else to betake themselves to wood sides, cool shades, and places where the wind blows through." The "Blue Tail Fly" in the eponymous song was probably the mourning horsefly (Tabanus atratus), a tabanid with a blue-black abdomen common to the southeastern United States.
Paul Muldoon’s chapbook Binge contains a poem "Clegs and Midges" which uses gadflies, real and metaphoric, "cleg" being a British term for the horse-fly.
In Norse mythology, Loki took the form of a gadfly to hinder Brokkr during the manufacture of Mjölnir, the hammer of Thor.
| Biology and health sciences | Flies (Diptera) | null |
683026 | https://en.wikipedia.org/wiki/Azithromycin | Azithromycin | Azithromycin, sold under the brand names Zithromax (in oral form) and Azasite (as an eye drop), is an antibiotic medication used for the treatment of several bacterial infections. This includes middle ear infections, strep throat, pneumonia, traveler's diarrhea, and certain other intestinal infections. Along with other medications, it may also be used for malaria. It is administered by mouth, into a vein, or into the eye.
Common side effects include nausea, vomiting, diarrhea and upset stomach. An allergic reaction, such as anaphylaxis, or a type of diarrhea caused by Clostridioides difficile is possible. Azithromycin causes QT prolongation that may cause life-threatening arrhythmias such as torsades de pointes. No harm has been found with its use during pregnancy. Its safety during breastfeeding is not confirmed, but it is likely safe. Azithromycin is an azalide, a type of macrolide antibiotic. It works by decreasing the production of protein, thereby stopping bacterial growth.
Azithromycin was discovered in Yugoslavia (present day Croatia) in 1980 by the pharmaceutical company Pliva and approved for medical use in 1988. It is on the World Health Organization's List of Essential Medicines. The World Health Organization lists it as an example under "Macrolides and ketolides" in its Critically Important Antimicrobials for Human Medicine (designed to help manage antimicrobial resistance). It is available as a generic medication and is sold under many brand names worldwide. In 2022, it was the 78th most commonly prescribed medication in the United States, with more than 8 million prescriptions.
Medical uses
Azithromycin is used to treat diverse infections, including:
Acute bacterial sinusitis due to H. influenzae, M. catarrhalis, or S. pneumoniae. A 1999 study found azithromycin to resolve symptoms faster than amoxicillin/clavulanic acid.
Acute otitis media caused by H. influenzae, M. catarrhalis or S. pneumoniae. A 2021 study concluded that azithromycin was comparable to amoxicillin/clavulanate in this indication and that it was safer and better tolerated in children.
Community-acquired pneumonia due to C. pneumoniae, H. influenzae, M. pneumoniae, or S. pneumoniae.
Genital ulcer disease (chancroid) in men due to H. ducreyi
Pharyngitis or tonsillitis caused by S. pyogenes, as an alternative in individuals who cannot use first-line therapy
Prevention and treatment of acute bacterial exacerbations of chronic obstructive pulmonary disease due to H. influenzae, M. catarrhalis, or S. pneumoniae. The benefits of long-term prophylaxis must be weighed on a patient-by-patient basis against the risk of cardiovascular and other adverse effects.
Trachoma due to C. trachomatis
Uncomplicated skin infections due to S. aureus, S. pyogenes, or S. agalactiae
Whooping cough caused by B. pertussis.
Scrub typhus caused by Orientia tsutsugamushi.
Bacterial susceptibility
Azithromycin has relatively broad but shallow antibacterial activity. It inhibits some Gram-positive bacteria, some Gram-negative bacteria, and many atypical bacteria.
Aerobic and facultative Gram-positive microorganisms
Staphylococcus aureus (Methicillin-sensitive only)
Streptococcus agalactiae
Streptococcus pneumoniae
Streptococcus pyogenes
Aerobic and facultative anaerobic Gram-negative microorganisms
Bordetella pertussis
Haemophilus ducreyi
Haemophilus influenzae
Legionella pneumophila
Moraxella catarrhalis
Neisseria gonorrhoeae
Anaerobic microorganisms
Peptostreptococcus species
Prevotella bivia
Other microorganisms
Chlamydia trachomatis
Chlamydophila pneumoniae
Mycoplasma genitalium
Mycoplasma pneumoniae
Ureaplasma urealyticum
Pregnancy and breastfeeding
While some studies claim that no harm has been found with use during pregnancy, more recent studies in mice during late pregnancy have shown adverse effects of prenatal azithromycin exposure (PAzE) on embryonic testicular and neural development. One recent study reported obvious fetal changes under high-dose, mid-pregnancy, and multi-course exposure. However, more well-controlled studies in pregnant women are needed.
The safety of the medication during breastfeeding is unclear. It was reported that because only low levels are found in breast milk and the medication has also been used in young children, it is unlikely that breastfed infants would have adverse effects.
Airway diseases
Azithromycin has beneficial effects in the treatment of asthma. It possesses antibacterial, antiviral, and anti-inflammatory properties which contribute to its effectiveness. Asthma exacerbations can be caused by chronic neutrophilic inflammation, and azithromycin is known to reduce this type of inflammation due to its immunomodulatory properties. The recommended dosage for controlling asthma exacerbations with azithromycin is either 500 mg or 250 mg taken orally as tablets three times a week. In adults with severe asthma, low-dose azithromycin may be prescribed as an add-on treatment when standard therapies such as inhaled corticosteroids or long-acting beta2-agonists are not sufficient. Long-term use of azithromycin in patients with persistent symptomatic asthma aims to decrease the frequency of asthma exacerbations and improve their quality of life. While both its anti-inflammatory and antibacterial effects play crucial roles in treating asthma, studies suggest that responsiveness to azithromycin therapy depends on individual variations in lung bacterial burden and microbial composition, collectively referred to as the lung microbiome. The richness (diversity) of the lung microbiome has been identified as a key factor in determining the effectiveness of azithromycin treatment. Azithromycin has significant interactions with the patient's microbiome: long-term use reduces the presence of H. influenzae bacteria in the airways but also increases resistance against macrolide antibiotics. The specific pharmacological mechanisms through which azithromycin interacts with the patient's microbiome remain unknown; research continues to explore how changes in microbial composition influence drug efficacy and patient outcomes.
Azithromycin appears to be effective in the treatment of chronic obstructive pulmonary disease through its suppression of inflammatory processes. Azithromycin is potentially useful in sinusitis via this mechanism. Azithromycin is believed to produce its effects through suppressing certain immune responses that may contribute to inflammation of the airways.
Adverse effects
The most common adverse effects are diarrhea (5%), nausea (3%), abdominal pain (3%), and vomiting. Fewer than 1% of people stop taking the drug due to side effects. Nervousness, skin reactions, and anaphylaxis have been reported. Clostridioides difficile infection has been reported with use of azithromycin. Unlike some other antibiotics such as rifampin, azithromycin does not affect the efficacy of birth control. Hearing loss has been reported.
Occasionally, people have developed cholestatic hepatitis or delirium. Accidental intravenous overdose in an infant caused severe heart block, resulting in residual encephalopathy.
In 2013, the US Food and Drug Administration (FDA) issued a warning that azithromycin "can cause abnormal changes in the electrical activity of the heart that may lead to a potentially fatal irregular heart rhythm." The FDA noted in the warning a 2012 study that found the drug may increase the risk of death, especially in those with heart problems, compared with those on other antibiotics such as amoxicillin or no antibiotic. The warning indicated people with preexisting conditions are at particular risk, such as those with abnormalities in the QT interval, low blood levels of potassium or magnesium, a slower than normal heart rate, or those who use certain drugs to treat abnormal heart rhythms. The warning mentioned that azithromycin causes QT prolongation that may cause life-threatening arrhythmias such as torsades de pointes.
Interactions
Colchicine
Azithromycin should not be taken with colchicine, as the combination may lead to colchicine toxicity. Symptoms of colchicine toxicity include gastrointestinal upset, fever, myalgia, pancytopenia, and organ failure.
Drugs metabolized by CYP3A4
CYP3A4 is an enzyme that metabolizes many drugs in the liver. Some drugs can inhibit CYP3A4, which means they reduce its activity and increase the blood levels of the drugs that depend on it for elimination. This can lead to adverse effects or drug-drug interactions.
Azithromycin is a macrolide, a class of antibiotics with a cyclic structure consisting of a lactone ring and sugar moieties. Macrolides can inhibit CYP3A4 through mechanism-based inhibition (MBI), which involves the formation of reactive metabolites that bind covalently and irreversibly to the enzyme, rendering it inactive. Mechanism-based inhibition is more serious and long-lasting than reversible inhibition, as it requires the synthesis of new enzyme molecules to restore activity.
The degree of mechanism-based inhibition by macrolides depends on the size and structure of their lactone ring. Clarithromycin and erythromycin have a 14-membered lactone ring, which is more prone to demethylation by CYP3A4 and subsequent formation of nitrosoalkenes, the reactive metabolites that cause mechanism-based inhibition. Azithromycin, on the other hand, has a 15-membered lactone ring, which is less susceptible to demethylation and nitrosoalkene formation. Therefore, azithromycin is a weak inhibitor of CYP3A4, while clarithromycin and erythromycin are strong inhibitors which can increase the area under the curve (AUC) of co-administered drugs more than five-fold. The AUC is a measure of drug exposure in the body over time. By inhibiting CYP3A4, macrolide antibiotics such as erythromycin and clarithromycin, but not azithromycin, can significantly increase the AUC of drugs that depend on the enzyme for clearance, leading to a higher risk of adverse effects or drug-drug interactions.
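To make the AUC concept concrete, the sketch below integrates a sampled concentration-time series with the trapezoidal rule. This is only an illustration: the sampling times and concentrations are invented placeholders, not real azithromycin data.

```python
# Sketch: approximate AUC (area under the concentration-time curve) from
# sampled drug concentrations using the trapezoidal rule.
# All numbers below are invented placeholders, not real pharmacokinetic data.

def auc_trapezoid(times_h, conc_mg_per_l):
    """Return AUC in mg*h/L for paired time (h) and concentration (mg/L) samples."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += dt * (conc_mg_per_l[i] + conc_mg_per_l[i - 1]) / 2.0
    return auc

times = [0, 1, 2, 4, 8, 24]              # hours after dosing (hypothetical)
conc = [0.0, 0.3, 0.4, 0.35, 0.2, 0.05]  # mg/L (hypothetical)
print(f"AUC ~= {auc_trapezoid(times, conc):.2f} mg*h/L")
```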
The difference in CYP3A4 inhibition by macrolides has clinical implications, for example for people who take statins, which are cholesterol-lowering drugs that are mainly metabolized by CYP3A4. Co-administration of clarithromycin or erythromycin with statins can increase the risk of statin-induced myopathy, a condition that causes muscle pain and damage. Azithromycin, however, does not significantly affect the pharmacokinetics of statins and is considered a safer alternative than other macrolide antibiotics.
Pharmacology
Mechanism of action
Azithromycin prevents bacteria from growing by interfering with their protein synthesis. It binds to the 50S subunit of the bacterial ribosome, thus inhibiting translation of mRNA. Nucleic acid synthesis is not affected.
Pharmacokinetics
Azithromycin is an acid-stable antibiotic, so it can be taken orally without needing protection from gastric acids. It is readily absorbed, but absorption is greater on an empty stomach. Time to peak concentration (Tmax) in adults is 2.1 to 3.2 hours for oral dosage forms. Due to its high concentration in phagocytes, azithromycin is actively transported to the site of infection, and during active phagocytosis large concentrations are released. The concentration of azithromycin in the tissues can be over 50 times higher than in plasma due to ion trapping and its high lipid solubility. Azithromycin's long half-life allows a large single dose to be administered while maintaining bacteriostatic levels in the infected tissue for several days.
Following a single dose of 500 mg, the apparent terminal elimination half-life of azithromycin is 68 hours. Biliary excretion of azithromycin, predominantly unchanged, is a major route of elimination. Over the course of a week, about 6% of the administered dose appears as an unchanged drug in urine.
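As a rough illustration of what a 68-hour terminal half-life implies, the sketch below assumes simple first-order, one-compartment elimination; real azithromycin kinetics are multi-compartmental, so this is a simplification, not a model of the drug's actual disposition.

```python
import math

# Sketch: fraction of a single dose remaining over time under first-order
# elimination with the 68-hour half-life quoted above (one-compartment
# simplification).

HALF_LIFE_H = 68.0
k = math.log(2) / HALF_LIFE_H  # elimination rate constant, 1/h

for t in (24, 72, 168):  # 1 day, 3 days, 1 week
    print(f"after {t:>3} h: {math.exp(-k * t):.0%} of the dose remains")
```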
History
A team of researchers at the pharmaceutical company Pliva in Zagreb, former Yugoslavia (present day Croatia) discovered azithromycin in 1980. The company Pliva patented it in 1981. In 1986, Pliva and Pfizer signed a licensing agreement, which gave Pfizer exclusive rights for the sale of azithromycin in Western Europe and the United States. Pliva put its azithromycin on the market in Central and Eastern Europe. Pfizer launched azithromycin under Pliva's license in other markets under the brand name Zithromax in 1991. Patent protection ended in 2005.
Society and culture
Available forms
Azithromycin is available as a generic medication. It is administered in film-coated tablet, capsule, oral suspension, intravenous injection, granules for suspension in sachet, and ophthalmic solution forms.
Usage
In 2010, azithromycin was the most prescribed antibiotic for outpatients in the US, whereas in Sweden, where outpatient antibiotic use is a third as prevalent, macrolides appear on only 3% of prescriptions. In 2017 and 2022, azithromycin was the second most prescribed antibiotic for outpatients in the United States. In 2022, it was the 78th most commonly prescribed medication in the United States, with more than 8 million prescriptions.
Brand names
It is sold under many brand names worldwide including 3-Micina, A Sai Qi, Abacten, Abbott, Acex, Acithroc, Actazith, Agitro, Ai Mi Qi, Amixef, Amizin, Amovin, An Mei Qin, Ao Li Ping, Apotex, Lebanon, Aratro, Aruzilina, Arzomicin, Arzomidol, Asizith, Asomin, Astidal, Astro, Athofix, Athxin, Atizor, Atromizin, Avalon, AZ, AZA, Azacid, Azadose, Azalid, Azalide, AzaSite, Azaroth, Azath, Azatril, Azatril, Azax, Azee, Azeecor, Azeeta, Azelide, Azeltin, Azenil, Azeptin, Azerkym, Azi, Aziact, Azibact, Azibactron, Azibay, Azibect, Azibest, Azibiot, Azibiotic, Azicare, Azicin, Azicine, Aziclass, Azicom, Azicure, Azid, Azidose, Azidraw, Azifam, Azifarm, Azifast, Azifine, Aziflax, Azigen, Azigram, Azigreat, Azikare, Azilide, Azilife, Azilip, Azilup, Azimac, Azimax, Azimed, Azimepha, Azimex, Azimit, Azimix, Azimon, Azimore, Azimycin, Azimycine, Azin, Azindamon, Azinew, Azinex, Azinif, Azinil, Azintra, Aziom, Azipar, Aziped, Aziphar, Azipin, Aziplex, Azipro, Aziprome, Aziquilab, Azirace, Aziram, Aziresp, Aziride, Azirol, Azirom, Azirox, Azirute, Azirutec, Aziset, Azisis, Azison, Azissel, Aziswift, Azit, Azita, Azitam, Azitex, Azith, Azithral, Azithrin, Azithro, Azithrobeta, Azithrocin, Azithrocine, Azithromax, Azithromed, Azithromicina, Azithromycin, Azithromycine, Azithromycinum, Azithrovid, Azitic, Azitive, Azitome, Azitrac, Azitral, Azitrax, Azitredil, Azitrex, Azitrim, Azitrin, Azitrix, Azitro, Azitrobac, Azitrocin, Azitroerre, Azitrogal, Azitrolabsa, Azitrolid, Azitrolit, Azitrom, Azitromac, Azitromax, Azitromek, Azitromicin, Azitromicina, Azitromycin, Azitromycine, Azitrona, Azitropharma, Azitroteg, Azitrox, Azitsa, Azitus, Azivar, Azivirus, Aziwill, Aziwok, Azix, Azizox, Azmycin, Azo, Azobat, Azocin, Azoget, Azoheim, Azoksin, Azom, Azomac, Azomax, Azomex, Azomycin, Azomyne, Azores, Azorox, Azostar, Azot, Azoxin, Azras, Azro, Azrocin, Azrolid, Azromax, Azroplax, Azrosin, Aztin, Aztrin, Aztro, Aztrogecin, Azvig, Azycin, Azycyna, Azydrop, Azypin, Azytact, Azytan, Azyter, Azyter, Azyth, Azywell, Azza, Ba Qi, Bactaway, Bactizith, Bactrazol, Bai Ke De Rui, Batif, Bazyt, Bezanin, Bin Qi, Binozyt, BinQi, Biocine, Biozit, Bo Kang, Canbiox, Cetaxim, Charyn, Chen Yu, Cinalid, Cinetrin, Clamelle, Clearsing, Corzi, Cozith, Cronopen, Curazith, Delzosin, Demquin, Dentazit, Disithrom, Doromax, Doyle, Elzithro, Eniz, Epica, Ethrimax, Ezith, Fabodrox, Fabramicina, Feng Da Qi, Figothrom, Floctil, Flumax, Fu Qi-Hua Yuan, Fu Rui Xin, Fuqixing, Fuxin-Hai Xin Pharm, Geozif, Geozit, Gitro, Goldamycin, Gramac, Gramokil, Hemomicin, Hemomycin, I-Thro, Ilozin, Imexa, Inedol, Infectomycin, Iramicina, Itha, Jin Nuo, Jin Pai Qi, Jinbo, Jun Jie, Jun Wei Qing, Kai Qi, Kang Li Jian, Kang Qi, Katrozax, Ke Lin Da, Ke Yan Li, Koptin, Kuai Yu, L-Thro, Laz, Legar, Lg-Thral, Li Ke Si, Li Li Xing, Li Qi, Li Quan Yu, Lin Bi, Lipuqi, Lipuxin, Lizhu Qile, Loromycin, Lu Jia Kang, Luo Bei Er, Luo Qi, Maazi, Macroazi, Macromax, Macrozit, Maczith, Makromicin, Maxmor, Mazit, Mazitrom, Medimacrol, Meithromax, Mezatrin, Ming Qi Xin, Misultina, Mycinplus, Na Qi, Nadymax, Naxocina, Neblic, Nemezid, Neofarmiz, Nifostin, Nobaxin, Nokar, Novatrex, Novozithron, Novozitron, Nurox, Odaz, Odazyth, Onzet, Oranex, Oranex, Ordipha, Orobiotic, Pai Fen, Pai Fu, Paiqi, Pediazith, Pi Nis, Portex, Pu He, Pu Le Qi, Pu Yang, Qi Gu Mei, Qi Mai Xing, Qi Nuo, Qi Tai, Qi Xian, Qili, Qiyue, Rarpezit, Razimax, Razithro, Rezan, Ribotrex, Ribozith, Ricilina, Rizcin, Romax, Romycin, Rothin (Rakaposhi), Rozalid, Rozith, Ru Shuang Qi, Rui Qi, Rui Qi Lin, Rulide, Sai Jin Sha, 
Sai Le Xin, Sai Qi, Santroma, Selimax, Sheng Nuo Ling, Shu Luo Kang, Simpli-3, Sisocin, Sitrox, Sohomac, Stromac, Su Shuang, Sumamed, Sumamox, Tailite, Talcilina, Tanezox, Te Li Xin, Tetris, Texis, Thoraxx, Throin, Thromaxin, Tong Tai Qi Li, Topt, Toraseptol, Tremac, Trex, Tri Azit, Triamid, Tridosil, Trimelin, Tritab, Tromiatlas, Tromix, Trozamil, Trozin, Trozocina, Trulimax, Tuoqi, Udox, Ultreon, Ultreon, Vectocilina, Vinzam, Visag, Vizicin, Wei Li Qinga, Wei Lu De, Wei Zong, Weihong, Xerexomair, Xi Le Xin, Xi Mei, Xin Da Kang, Xin Pu Rui, Xithrone, Ya Rui, Yan Sha, Yanic, Yi Nuo Da, Yi Song, Yi Xina, Yin Pei Kang, Yong Qi, You Ni Ke, Yu Qi, Z-3, Z-PAK, Zady, Zaiqi, Zaret, Zarom, Zathrin, Zedbac, Zeelide, Zeemide, Zenith, Zentavion, Zetamac, Zetamax, Zeto, Zetron, Zevlen, Zibramax, Zicho, Zigilex, Zikrax, Zikti, Zimacrol, Zimax, Zimicina, Zimicine, Zindel, Zinfect, Zirom, Zisrocin, Zistic, Zit-Od, Zitab, Zitax, Zithrax, Zithrin, Zithro-Due, Zithrobest, Zithrodose, Zithrogen, Zithrokan, Zithrolide, Zithromax, Zithrome, Zithromed, Zithroplus, Zithrotel, Zithrox, Zithroxyn, Zithtec, Zitinn, Zitmac, Zitraval, Zitrax, Zitrex, Zitric, Zitrim, Zitrobid, Zitrobiotic, Zithrolect, Zitrocin, Zitrogram, Zitrolab, Zitromax, Zitroneo, Zitrotek, Ziyoazi, Zmax, Zocin, Zomax, Zotax, Zycin, and Zythrocin.
It is sold as a combination drug with cefixime as Anex-AZ, Azifine-C, Aziter-C, Brutacef-AZ, Cezee, Fixicom-AZ, Emtax-AZ, Olcefone-AZ, Starfix-AZ, Zeph-AZ, Zicin-CX, and Zifi-AZ.
It is also sold as a combination drug with nimesulide as Zitroflam; in a combination with tinidazole and fluconazole as Trivafluc, and in a combination with ambroxol as Zathrin-AX, Laz-AX and Azro-AM.
Research
Azithromycin is researched for its supposed anti-inflammatory and immunomodulatory properties, which are believed to be exerted through suppression of proinflammatory cytokines and enhanced production of anti-inflammatory cytokines, which is important in dampening inflammation. Cytokines are small proteins that are secreted by immune cells and play a key role in the immune response. Studies suggest that azithromycin can decrease the release of pro-inflammatory cytokines such as TNF-alpha, IL-1β, IL-6, and IL-8 while increasing the level of the anti-inflammatory cytokine IL-10. By decreasing pro-inflammatory cytokines, azithromycin probably limits tissue damage during inflammation. These effects are believed to be due to azithromycin's ability to suppress the transcription factor nuclear factor-kappa B (NF-κB), blocking inflammatory response pathways downstream of NF-κB activation and decreasing chemokine receptor CXCR4 signaling, thereby reducing inflammation. Despite the efficacy of azithromycin in treating rosacea, the exact mechanism of its effectiveness in that condition is not completely understood. It is unclear whether its antibacterial or immunomodulatory properties, or a combination of both, contribute to its efficacy. Azithromycin may prevent mast cell degranulation and thus can suppress inflammation of dorsal root ganglia through various signaling pathways, such as decreased numbers of CD4+ T cells, which are particularly relevant since they mediate the response to hair follicle antigens. Inflammation in rosacea is thought to be associated with increased production of reactive oxygen species (ROS) by inflammatory cells. The ability of azithromycin to decrease ROS production may help reduce oxidative stress and inflammation, but this remains speculative.
The therapeutic role of azithromycin has been explored in various diseases such as cystic fibrosis exacerbation, burn injury-induced lung injury, asthma, chronic obstructive pulmonary disease, and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in COVID-19 infection. Despite early evidence showing azithromycin slowed down coronavirus multiplication in laboratory settings, further research indicates it to be ineffective as a treatment for COVID-19 in humans. Azithromycin in conjunction with chloroquine or hydroxychloroquine has been associated with deleterious outcomes in COVID-19 patients, including drug-induced QT prolongation. After a large-scale trial showed no benefit of using azithromycin in treating COVID-19, the UK's National Institute for Health and Care Excellence (NICE) updated its guidance and no longer recommends the medication for COVID-19.
Azithromycin has been studied in the treatment of chronic fatigue syndrome (CFS) and has been reported to improve or even resolve symptoms in some cases. However, these studies have been described as being of very low quality. In any case, the beneficial effects might result from eradication of chronic bacterial infections that are possibly contributing to or causing CFS, or from the immunomodulatory effects of azithromycin.
Azithromycin therapy in cystic fibrosis patients yields a modest respiratory function improvement, reduces exacerbation risk, and extends time to exacerbation by up to six months; still, long-term efficacy data are the subject of ongoing research. Potential benefits of azithromycin therapy include its good safety profile, minimal treatment burden, and cost-effectiveness, while the drawbacks are gastrointestinal side effects with weekly dosing, which can be ameliorated by a split-dose regimen. The potential role of azithromycin in inhibiting the autophagic destruction of non-tuberculous mycobacteria (NTM) within macrophages has garnered significant attention. This mechanism may contribute to the observed correlation between long-term macrolide monotherapy and an increased risk of NTM infection and the emergence of macrolide-resistant strains. Azithromycin's interference with autophagy could potentially predispose patients with cystic fibrosis to mycobacterial infections. Despite repeated refutations of a direct association between azithromycin use and NTM infection, there remains a high level of concern regarding the potential for the development of NTM strains resistant to macrolides.
Azithromycin has been shown to be an effective preventive measure against many postpartum infections in mothers following planned vaginal births; still, its impact on neonatal outcomes remains inconclusive and is the subject of ongoing research.
| Biology and health sciences | Antibiotics | Health |
683065 | https://en.wikipedia.org/wiki/Trisodium%20citrate | Trisodium citrate | Trisodium citrate is a chemical compound with the molecular formula Na3C6H5O7. It is sometimes referred to simply as "sodium citrate", though sodium citrate can refer to any of the three sodium salts of citric acid. It possesses a saline, mildly tart taste, and is a mild alkali.
Uses
Foods
Sodium citrate is chiefly used as a food additive, usually for flavor or as a preservative. Its E number is E331. Sodium citrate is employed as a flavoring agent in certain varieties of club soda. It is common as an ingredient in bratwurst, and is also used in commercial ready-to-drink beverages and drink mixes, contributing a tart flavor. It is found in gelatin mix, ice cream, yogurt, jams, sweets, milk powder, processed cheeses, carbonated beverages, wine, and butter chicken, amongst others. Because the elements in Na3C6H5O7 spell "Na C H O", "Nacho Cheese" is a convenient mnemonic for trisodium citrate's chemical formula.
Sodium citrate can be used as an emulsifying stabilizer when making cheese. It allows the cheese to melt without becoming greasy by stopping the fats from separating.
Buffering
As a conjugate base of a weak acid, citrate can perform as a buffering agent or acidity regulator, resisting changes in pH. It is used to control acidity in some substances, such as gelatin desserts. It can be found in the milk minicontainers used with coffee machines. The compound is the product of antacids, such as Alka-Seltzer, when they are dissolved in water. The pH range of a solution of 5 g/100 ml water at 25 °C is 7.5 to 9.0. It is added to many commercially packaged dairy products to control the pH impact of the gastrointestinal system of humans, mainly in processed products such as cheese and yogurt, although it also has beneficial effects on the physical gel microstructure.
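The buffering behaviour can be sketched with the Henderson-Hasselbalch equation applied to the hydrogen citrate/citrate pair. The pKa below is an assumed textbook value (about 6.4 for citric acid's third dissociation at 25 °C), not a figure taken from this article.

```python
import math

# Sketch: pH of a citrate buffer via Henderson-Hasselbalch,
# pH = pKa + log10([base]/[acid]). pKa3 of citric acid ~6.4 (assumed).

PKA3 = 6.40

def buffer_ph(citrate_m, hydrogen_citrate_m, pka=PKA3):
    return pka + math.log10(citrate_m / hydrogen_citrate_m)

print(buffer_ph(0.10, 0.10))  # equal base and acid -> pH = pKa = 6.4
print(buffer_ph(0.10, 0.01))  # even a 10:1 ratio shifts the pH by only one unit -> 7.4
```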
Chemistry
Sodium citrate is a component in Benedict's qualitative solution, often used in organic analysis to detect the presence of reducing sugars such as glucose.
Medicine
In 1914, the Belgian doctor Albert Hustin and the Argentine physician and researcher Luis Agote successfully used sodium citrate as an anticoagulant in blood transfusions, with Richard Lewisohn determining its correct concentration in 1915. It continues to be used in blood-collection tubes and for the preservation of blood in blood banks. The citrate ion chelates calcium ions in the blood by forming calcium citrate complexes, disrupting the blood clotting mechanism. Recently, trisodium citrate has also been used as a locking agent in vascath and haemodialysis lines instead of heparin due to its lower risk of systemic anticoagulation.
In 2003, Ööpik et al. showed the use of sodium citrate (0.5 g/kg body weight) improved running performance over 5 km by 30 seconds.
Sodium citrate is used to relieve discomfort in urinary-tract infections, such as cystitis, to reduce the acidosis seen in distal renal tubular acidosis, and can also be used as an osmotic laxative. It is a major component of the WHO oral rehydration solution.
It is used as an antacid, especially prior to anaesthesia, for caesarian section procedures to reduce the risks associated with the aspiration of gastric contents.
Boiler descaling
Sodium citrate is a particularly effective agent for removal of carbonate scale from boilers without removing them from operation and for cleaning automobile radiators.
| Physical sciences | Citrates | Chemistry |
683071 | https://en.wikipedia.org/wiki/Calcium%20citrate | Calcium citrate | Calcium citrate is the calcium salt of citric acid. It is commonly used as a food additive (E333), usually as a preservative, but sometimes for flavor. In this sense, it is similar to sodium citrate. Calcium citrate is also found in some dietary calcium supplements (e.g. Citracal or Caltrate). Calcium makes up 24.1% of calcium citrate (anhydrous) and 21.1% of calcium citrate (tetrahydrate) by mass. The tetrahydrate occurs in nature as the mineral Earlandite.
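The quoted calcium mass fractions can be verified from standard atomic masses, as in the short sketch below (atomic masses are approximate).

```python
# Sketch: calcium mass fraction of calcium citrate from approximate
# standard atomic masses.

CA, C, H, O = 40.078, 12.011, 1.008, 15.999

citrate = 6 * C + 5 * H + 7 * O             # citrate anion, C6H5O7(3-)
anhydrous = 3 * CA + 2 * citrate            # Ca3(C6H5O7)2
tetrahydrate = anhydrous + 4 * (2 * H + O)  # plus 4 H2O

print(f"anhydrous:    {3 * CA / anhydrous:.1%} Ca")     # ~24.1%
print(f"tetrahydrate: {3 * CA / tetrahydrate:.1%} Ca")  # ~21.1%
```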
Chemical properties
Calcium citrate is sparingly soluble in water. Needle-shaped crystals of tricalcium dicitrate tetrahydrate [Ca3(C6H5O7)2(H2O)2]·2H2O were obtained by hydrothermal synthesis. The crystal structure comprises a three-dimensional network in which eightfold coordinated Ca2+ cations are linked by citrate anions and hydrogen bonds between two non-coordinating crystal water molecules and two coordinating water molecules.
Production
Calcium citrate is an intermediate in the isolation of citric acid from the fungal fermentation process by which citric acid is produced industrially. The citric acid in the broth solution is neutralized by limewater, precipitating insoluble calcium citrate. This is then filtered off from the rest of the broth and washed to give clean calcium citrate.
3 Ca(OH)2(s) + 2 C6H8O7(l) → Ca3(C6H5O7)2(s) + 6 H2O(l)
The calcium citrate thus produced may be sold as-is, or it may be converted to citric acid using dilute sulfuric acid.
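From the equation above, two moles of citric acid give one mole of calcium citrate, so the theoretical recovery can be sketched as follows (molar masses are approximate, and a real process would yield somewhat less):

```python
# Sketch: theoretical mass of anhydrous calcium citrate precipitated from a
# given mass of citric acid, per the 2:1 stoichiometry above.

M_CITRIC_ACID = 192.12  # C6H8O7, g/mol (approximate)
M_CA_CITRATE = 498.43   # Ca3(C6H5O7)2, g/mol (approximate)

def citrate_yield_g(citric_acid_g):
    mol_acid = citric_acid_g / M_CITRIC_ACID
    return (mol_acid / 2.0) * M_CA_CITRATE

print(f"{citrate_yield_g(100.0):.1f} g")  # ~129.7 g from 100 g of citric acid
```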
Medical uses
It is primarily sold as a calcium food supplement.
Bioavailability
In many individuals, bioavailability of calcium citrate is found to be equal to that of the cheaper calcium carbonate (CaCO3). However, alterations to the digestive tract may change how calcium is digested and absorbed. Unlike calcium carbonate, which is basic and neutralizes stomach acid, calcium citrate has no effect on stomach acid. Calcium carbonate is harder to digest than calcium citrate, and calcium carbonate carries a risk of "acid rebound" (the stomach overcompensates by producing more acid), so individuals who are sensitive to antacids or who have difficulty producing adequate stomach acid may choose calcium citrate over calcium carbonate for supplementation.
According to a 2009 study of calcium absorption after gastric bypass surgery, calcium citrate may have improved bioavailability over calcium carbonate in Roux-en-Y gastric bypass patients who take calcium citrate as a dietary supplement after surgery. This is mainly due to changes in where calcium absorption occurs in the digestive tract of these individuals.
| Physical sciences | Citrates | Chemistry |
683322 | https://en.wikipedia.org/wiki/HEPA | HEPA | HEPA (high efficiency particulate air) filter, also known as a high efficiency particulate arresting filter, is an efficiency standard for air filters.
Filters meeting the HEPA standard must satisfy certain levels of efficiency. Common standards require that a HEPA air filter must remove—from the air that passes through—at least 99.95% (ISO, European Standard) or 99.97% (ASME, U.S. DOE) of particles whose diameter is equal to 0.3 μm, with the filtration efficiency increasing for particle diameters both less than and greater than 0.3 μm. HEPA filters capture pollen, dirt, dust, moisture, bacteria (0.2–2.0 μm), viruses (0.02–0.3 μm), and submicron liquid aerosol (0.02–0.5 μm). Some microorganisms, for example, Aspergillus niger, Penicillium citrinum, Staphylococcus epidermidis, and Bacillus subtilis are captured by HEPA filters with photocatalytic oxidation (PCO). A HEPA filter is also able to capture some viruses and bacteria which are ≤0.3 μm. A HEPA filter is also able to capture floor dust which contains bacteroidia, clostridia, and bacilli. HEPA was commercialized in the 1950s, and the original term became a registered trademark and later a generic trademark for highly efficient filters. HEPA filters are used in applications that require contamination control, such as the manufacturing of hard disk drives, medical devices, semiconductors, nuclear, food and pharmaceutical products, as well as in hospitals, homes, and vehicles.
Mechanism
HEPA filters are composed of a mat of randomly arranged fibers, typically made of polypropylene or fiberglass with diameters between 0.5 and 2.0 micrometers. Most of the time, these filters are composed of tangled bundles of fine fibers, which create a narrow, convoluted pathway through which air passes. When the largest particles pass through this pathway, the bundles of fibers behave like a kitchen sieve, physically blocking the particles. Smaller particles carried by the air, however, cannot keep up with the motion of the air as it twists and turns, and so collide with the fibers. The smallest particles have very little inertia and move randomly as a result of collisions with individual air molecules (Brownian motion); because of this movement, they too end up crashing into the fibers. Key factors affecting filter function are fiber diameter, filter thickness, and face velocity, which is the measured air speed at an inlet or outlet of a heating, ventilation and air conditioning (HVAC) system. Face velocity is measured in m/s and can be calculated as the volume flow rate (m3/s) divided by the face area (m2); a minimal calculation sketch follows the list of capture mechanisms below. The air space between HEPA filter fibers is typically much greater than 0.3 μm, yet HEPA filters capture even the smallest particulate matter at very high efficiency. Unlike sieves or membrane filters, where particles smaller than the openings or pores can pass through, HEPA filters are designed to target a range of particle sizes. These particles are trapped (they stick to a fiber) through a combination of the following three mechanisms:
Diffusion; particles below 0.3 μm are captured by diffusion in a HEPA filter. This mechanism is a result of the collision with gas molecules by the smallest particles, especially those below 0.1 μm in diameter. The small particles are effectively blown or bounced around and collide with the filter media fibers. This behavior is similar to Brownian motion and raises the probability that a particle will be stopped by either interception or impaction; this mechanism becomes dominant at lower airflow.
Interception; particles following a line of flow in the air stream come within one radius of a fiber and adhere to it. Mid size particles are being captured by this process.
Impaction; larger particles are unable to avoid fibers by following the curving contours of the air stream and are forced to embed in one of them directly; this effect increases with diminishing fiber separation and higher air flow velocity.
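The face-velocity relation mentioned before the list is simple enough to sketch directly; the flow rate and filter dimensions below are invented example values.

```python
# Sketch: face velocity (m/s) = volume flow rate (m^3/s) / face area (m^2).
# Example values are invented.

def face_velocity(flow_m3_per_s, width_m, height_m):
    return flow_m3_per_s / (width_m * height_m)

# 0.5 m^3/s through a 0.6 m x 0.6 m filter face:
print(f"{face_velocity(0.5, 0.6, 0.6):.2f} m/s")  # ~1.39 m/s
```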
Diffusion predominates below the 0.1 μm diameter particle size, whilst impaction and interception predominate above 0.4 μm. In between, near the most penetrating particle size (MPPS) of 0.21 μm, both diffusion and interception are comparatively inefficient. Because this is the weakest point in the filter's performance, the HEPA specifications use the retention of particles near this size (0.3 μm) to classify the filter. However, particles smaller than the MPPS may not be filtered more efficiently than particles at the MPPS, because such particles can act as nucleation sites (mostly for condensation) and grow into particles near the MPPS.
Gas filtration
HEPA filters are designed to arrest very fine particles effectively, but they do not filter out gases and odor molecules. Circumstances requiring filtration of volatile organic compounds, chemical vapors, or cigarette, pet or flatulence odors call for the use of an activated carbon (charcoal) or other type of filter instead of, or in addition to, a HEPA filter. Carbon cloth filters, claimed to be many times more efficient than the granular activated carbon form at adsorbing gaseous pollutants, are known as high efficiency gas adsorption (HEGA) filters and were originally developed by the British Armed Forces as a defense against chemical warfare.
Pre-filter and HEPA filter
A HEPA bag filter can be used in conjunction with a pre-filter (usually carbon-activated) to extend the usage life of the more expensive HEPA filter. In such a setup, the first stage in the filtration process is a pre-filter which removes most of the larger dust, hair, PM10 and pollen particles from the air. The second-stage, high-quality HEPA filter removes the finer particles that escape the pre-filter. This is common in air handling units.
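The benefit of staging can be sketched by noting that penetrations multiply: whatever the pre-filter lets through becomes the input to the HEPA stage. The 80% pre-filter efficiency below is an assumed example placed alongside the 99.97% HEPA figure quoted in this article; real efficiencies vary with particle size.

```python
# Sketch: combined efficiency of filter stages in series. Each stage passes
# a fraction (1 - efficiency) of incoming particles, and those fractions
# multiply.

def combined_efficiency(*stage_efficiencies):
    penetration = 1.0
    for eff in stage_efficiencies:
        penetration *= 1.0 - eff
    return 1.0 - penetration

# An assumed 80%-efficient pre-filter ahead of a 99.97% HEPA stage:
print(f"{combined_efficiency(0.80, 0.9997):.6%}")  # ~99.994%
```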
Specifications
HEPA filters, as defined by the United States Department of Energy (DOE) standard adopted by most American industries, remove at least 99.97% of aerosols 0.3 micrometers (μm) in diameter. The filter's minimal resistance to airflow, or pressure drop, is usually specified at its nominal volumetric flow rate.
The specification used in the European Union: European Standard EN 1822-1:2019, from which ISO 29463 is derived, defines several classes of filters by their retention at the given most penetrating particle size (MPPS): Efficient Particulate Air filters (EPA), High Efficiency Particulate Air filters (HEPA), and Ultra Low Particulate Air filters (ULPA). The averaged efficiency of the filter is called "overall", and the efficiency at a specific point is called "local":

Class | Overall efficiency at MPPS | Local efficiency at MPPS
E10 | ≥ 85% | (not specified)
E11 | ≥ 95% | (not specified)
E12 | ≥ 99.5% | (not specified)
H13 | ≥ 99.95% | ≥ 99.75%
H14 | ≥ 99.995% | ≥ 99.975%
U15 | ≥ 99.9995% | ≥ 99.9975%
U16 | ≥ 99.99995% | ≥ 99.99975%
U17 | ≥ 99.999995% | ≥ 99.9999%
| Technology | Food, water and health | null |
683327 | https://en.wikipedia.org/wiki/Total%20station | Total station | A total station or total station theodolite is an electronic/optical instrument used for surveying and building construction. It is an electronic transit theodolite integrated with electronic distance measurement (EDM) to measure both vertical and horizontal angles and the slope distance from the instrument to a particular point, and an on-board computer to collect data and perform triangulation calculations.
Robotic or motorized total stations allow the operator to control the instrument from a distance via remote control. In theory, this eliminates the need for an assistant staff member, as the operator holds the retroreflector and controls the total station from the observed point. In practice, however, an assistant surveyor is often needed when the surveying is being conducted in busy areas such as on a public carriageway or construction site. This is to prevent people from disrupting the total station as they walk past, which would necessitate resetting the tripod and re-establishing a baseline. Additionally, an assistant surveyor discourages opportunistic theft, which is not uncommon due to the value of the instrument. Should a theft occur nonetheless, most total stations have serial numbers, and the National Society of Professional Surveyors hosts a registry of stolen equipment which can be checked by institutions that service surveying equipment to prevent stolen instruments from circulating. These motorized total stations can also be used in automated setups known as "automated motorized total stations".
Function
Angle measurement
Most total station instruments measure angles by means of electro-optical scanning of extremely precise digital bar-codes etched on rotating glass cylinders or discs within the instrument. The best quality total stations are capable of measuring angles within a standard deviation of 0.5 arc-seconds. Inexpensive "construction grade" total stations can generally measure angles within standard deviations of 5 or 10 arc-seconds.
Angle measurement is typically performed with the operator first occupying a known point and aiming the head of the instrument at a target or prism at either another known point or along a known azimuth, which is held as a backsight (sighting with the reticle inside the eyepiece); that line is then held as an angle of 0°00′00″. The operator then turns the head of the instrument to a target or feature to be observed as a foresight and records the angle right (AR) from the backsight measured by the instrument, producing a horizontal angle. Angular error in the instrument, as well as collimation error, can be mitigated in many total stations by performing a set collection. This entails recording each angle an equal number of times in both "direct" and "reverse" modes, by sighting the observed backsight and foresights with the instrument facing the targets normally as well as with the scope flipped or "plunged" 180°. The recorded sets of angles from each target are then averaged to produce a mean angle.
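A minimal sketch of averaging one direct/reverse pair of pointings is shown below; in practice a set collection averages several such pairs, and the function name is illustrative.

```python
# Sketch: combine a direct (face-left) circle reading with its reverse
# (face-right) counterpart. Plunging the scope offsets the reading by 180
# degrees; removing that offset and averaging on the circle cancels
# collimation error.

def mean_direction(face_left_deg, face_right_deg):
    fr = (face_right_deg - 180.0) % 360.0                  # undo the plunge
    diff = ((fr - face_left_deg + 180.0) % 360.0) - 180.0  # shortest signed difference
    return (face_left_deg + diff / 2.0) % 360.0

print(f"{mean_direction(45.0010, 225.0030):.4f}")  # -> 45.0020 degrees
```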
Distance measurement
Measurement of distance is accomplished with a modulated infrared carrier signal, generated by a small solid-state emitter within the instrument's optical path, and reflected by a prism reflector or the object under survey. The modulation pattern in the returning signal is read and interpreted by the computer in the total station. The distance is determined by emitting and receiving multiple frequencies, and determining the integer number of wavelengths to the target for each frequency. Most total stations use purpose-built glass prism reflectors for the EDM signal. A typical total station can measure distances with an accuracy of about ±2 parts per million.
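The multi-frequency scheme can be sketched as follows: each modulation frequency yields only the fractional part of the phase, and a coarser (lower) frequency resolves the integer cycle count for the next finer one. The frequencies here are invented for illustration and do not correspond to any particular instrument.

```python
# Sketch: resolve a round-trip distance from fractional phases measured at
# several modulation frequencies, coarse to fine. Noise handling is omitted;
# each coarser estimate must already be accurate to within a fraction of the
# next stage's half-wavelength.

C = 299_792_458.0  # speed of light, m/s

def distance_from_phases(fractional_phases, freqs_hz):
    d = fractional_phases[0] * (C / freqs_hz[0]) / 2.0  # coarse, unambiguous
    for phi, f in zip(fractional_phases[1:], freqs_hz[1:]):
        half_wavelength = (C / f) / 2.0
        n = round(d / half_wavelength - phi)  # integer cycles from coarser fix
        d = (n + phi) * half_wavelength
    return d

true_d = 1234.567
freqs = [75e3, 7.5e6, 750e6]  # hypothetical modulation frequencies
phases = [(true_d / ((C / f) / 2.0)) % 1.0 for f in freqs]
print(distance_from_phases(phases, freqs))  # ~1234.567
```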
Reflectorless total stations can measure distances to any object that is reasonably light in color, up to a few hundred meters.
Coordinate measurement
The coordinates of an unknown point relative to a known coordinate can be determined using the total station as long as a direct line of sight can be established between the two points. Angles and distances are measured from the total station to points under survey, and the coordinates (X, Y, and Z; or easting, northing, and elevation) of surveyed points relative to the total station position are calculated using trigonometry and triangulation.
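In the simplest case the trigonometry reduces to converting one polar observation into coordinate differences. The sketch below ignores instrument and target heights, atmospheric corrections, and earth curvature, and its names are illustrative.

```python
import math

# Sketch: convert a total-station observation (azimuth, zenith angle, slope
# distance) into easting/northing/elevation differences from the instrument.

def polar_to_delta(azimuth_deg, zenith_deg, slope_dist):
    az, zen = math.radians(azimuth_deg), math.radians(zenith_deg)
    horiz = slope_dist * math.sin(zen)   # horizontal distance
    return (horiz * math.sin(az),        # delta easting
            horiz * math.cos(az),        # delta northing
            slope_dist * math.cos(zen))  # delta elevation

# 100 m slope distance at zenith angle 85 deg, observed on azimuth 45 deg:
print(polar_to_delta(45.0, 85.0, 100.0))
```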
To determine an absolute location, a total station requires line of sight observations and can be set up over a known point or with line of sight to 2 or more points with known location, called free stationing.
For this reason, some total stations also have a global navigation satellite system (GNSS) receiver and do not require a direct line of sight to determine coordinates. However, GNSS measurements may require longer occupation periods and offer relatively poor accuracy in the vertical axis.
Data processing
Some models include internal electronic data storage to record distance, horizontal angle, and vertical angle measured, while other models are equipped to write these measurements to an external data collector, such as a hand-held computer.
When data is downloaded from a total station onto a computer, application software can be used to compute results and generate a map of the surveyed area. The newest generation of total stations can also show the map on the touch-screen of the instrument immediately after measuring the points.
Applications
Most large-scale excavation or mapping projects benefit greatly from the proficient use of total stations. They are mainly used by land surveyors and civil engineers, either to record features as in topographic surveying or to set out features (such as roads, houses or boundaries). They are used by police, crime scene investigators, private accident reconstructionists and insurance companies to take measurements of scenes. Total stations are also employed by archaeologists, offering millimeter accuracy difficult to achieve using other tools as well as flexibility in setup location. They prove crucial in recording artifact locations, architectural dimensions, and site topography.
Mining
Total stations are the primary survey instrument used in mining surveying.
A total station is used to record the absolute location of the tunnel walls, ceilings (backs), and floors, as the drifts of an underground mine are driven. The recorded data are then downloaded into a CAD program and compared to the designed layout of the tunnel.
The survey party installs control stations at regular intervals. These are small steel plugs installed in pairs in holes drilled into walls or the back. For wall stations, two plugs are installed in opposite walls, forming a line perpendicular to the drift. For back stations, two plugs are installed in the back, forming a line parallel to the drift.
A set of plugs can be used to locate the total station set up in a drift or tunnel by processing measurements to the plugs by intersection and resection.
Mechanical and electrical construction
Total stations have become the highest standard for most forms of construction layout.
They are most often used in the X and Y axes to lay out the locations of penetrations from the underground utilities into the foundation, between floors of a structure, as well as roofing penetrations.
Because more commercial and industrial construction jobs have become centered around building information modeling (BIM), the coordinates for almost every pipe, conduit, duct and hanger support are available with digital precision. The application of communicating a virtual model to a tangible construction potentially eliminates labor costs related to moving poorly measured systems, as well as time spent laying out these systems in the midst of a full-blown construction job in progress.
Meteorology
Meteorologists also use total stations to track weather balloons for determining upper-level winds. With the average ascent rate of the weather balloon known or assumed, the change in azimuth and elevation readings provided by the total station as it tracks the weather balloon over time are used to compute the wind speed and direction at different altitudes. Additionally, the total station is used to track ceiling balloons to determine the height of cloud layers. Such upper-level wind data is often used for aviation weather forecasting and rocket launches.
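A simplified sketch of that computation: with an assumed constant ascent rate, each (elevation, azimuth) reading fixes a horizontal position, and successive positions are differenced over time. All numbers below are illustrative.

```python
import math

# Sketch: single-station upper-wind estimate from a tracked balloon,
# assuming a constant ascent rate (5 m/s here, an assumed figure).

def balloon_position(elev_deg, azim_deg, time_s, ascent_rate_ms=5.0):
    height = ascent_rate_ms * time_s
    horiz = height / math.tan(math.radians(elev_deg))  # ground distance
    return (horiz * math.sin(math.radians(azim_deg)),  # east
            horiz * math.cos(math.radians(azim_deg)))  # north

x1, y1 = balloon_position(30.0, 40.0, 60.0)   # reading at t = 60 s
x2, y2 = balloon_position(28.0, 50.0, 120.0)  # reading at t = 120 s
u, v = (x2 - x1) / 60.0, (y2 - y1) / 60.0     # mean wind components, m/s
print(f"wind ~ {math.hypot(u, v):.1f} m/s")
```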
Instrument manufacturers
Carl Zeiss (historical)
GeoMax, part of Hexagon AB
Hewlett-Packard (historical)
Hilti Corporation
Leica Geosystems, part of Hexagon AB
Nikon, part of Trimble
North Group LTD (historical)
Sokkia, part of Topcon
South Group
Spectra Geospatial, part of Trimble Navigation Ltd.
Stonex (company)
TI Asahi Co. Ltd, sold under the Pentax brand
Topcon
Trimble Navigation Ltd.
Wild Heerbrugg AG (historical), part of Leica Geosystems
| Technology | Surveying tools | null |
683342 | https://en.wikipedia.org/wiki/Fuse%20%28electrical%29 | Fuse (electrical) | In electronics and electrical engineering, a fuse is an electrical safety device that operates to provide overcurrent protection of an electrical circuit. Its essential component is a metal wire or strip that melts when too much current flows through it, thereby stopping or interrupting the current. It is a sacrificial device; once a fuse has operated, it is an open circuit, and must be replaced or rewired, depending on its type.
Fuses have been used as essential safety devices from the early days of electrical engineering. Today there are thousands of different fuse designs which have specific current and voltage ratings, breaking capacity, and response times, depending on the application. The time and current operating characteristics of fuses are chosen to provide adequate protection without needless interruption. Wiring regulations usually define a maximum fuse current rating for particular circuits. A fuse can be used to mitigate short circuits, overloading, mismatched loads, or device failure. When a damaged live wire makes contact with a metal case that is connected to ground, a short circuit will form and the fuse will melt.
A fuse is an automatic means of removing power from a faulty system, often abbreviated to ADS (automatic disconnection of supply). Circuit breakers can be used as an alternative to fuses, but have significantly different characteristics.
History
Louis Clément François Breguet recommended the use of reduced-section conductors to protect telegraph stations from lightning strikes; by melting, the smaller wires would protect apparatus and wiring inside the building. A variety of wire or foil fusible elements were in use to protect telegraph cables and lighting installations as early as 1864.
A fuse was patented by Thomas Edison in 1890 as part of his electric distribution system.
Construction
A fuse consists of a metal strip or wire fuse element, of small cross-section compared to the circuit conductors, mounted between a pair of electrical terminals, and (usually) enclosed by a non-combustible housing. The fuse is arranged in series to carry all the current passing through the protected circuit. The resistance of the element generates heat due to the current flow. The size and construction of the element is (empirically) determined so that the heat produced for a normal current does not cause the element to attain a high temperature. If too high a current flows, the element rises to a higher temperature and either directly melts, or else melts a soldered joint within the fuse, opening the circuit.
The fuse element is made of zinc, copper, silver, aluminum, or alloys of these and other metals to provide stable and predictable characteristics. The fuse ideally would carry its rated current indefinitely, and melt quickly on a small excess. The element must not be damaged by minor harmless surges of current, and must not oxidize or change its behavior after possibly years of service.
The fuse elements may be shaped to increase heating effect. In large fuses, current may be divided between multiple strips of metal. A dual-element fuse may contain a metal strip that melts instantly on a short circuit, and also contain a low-melting solder joint that responds to long-term overload of low values compared to a short circuit. Fuse elements may be supported by steel or nichrome wires, so that no strain is placed on the element, but a spring may be included to increase the speed of parting of the element fragments.
The fuse element may be surrounded by air, or by materials intended to speed the quenching of the arc. Silica sand or non-conducting liquids may be used.
Characteristics
Rated current IN
The maximum current that the fuse can continuously conduct without interrupting the circuit.
Time vs current characteristics
The speed at which a fuse blows depends on how much current flows through it and the material of which the fuse is made. Manufacturers can provide a plot of current vs time, often plotted on logarithmic scales, to characterize the device and to allow comparison with the characteristics of protective devices upstream and downstream of the fuse.
The operating time is not a fixed interval but decreases as the current increases. Fuses are designed to have particular characteristics of operating time compared to current. A standard fuse may require twice its rated current to open in one second, a fast-blow fuse may require twice its rated current to blow in 0.1 seconds, and a slow-blow fuse may require twice its rated current for tens of seconds to blow.
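Because published time-current curves plot nearly straight on log-log axes, a data-sheet curve can be interpolated in log space, as in the sketch below; the sample points are invented, not taken from any real fuse.

```python
import math

# Sketch: interpolate a manufacturer's time-current curve in log-log space.
# The (current, opening time) points below are invented example data.

CURVE = [(20.0, 300.0), (40.0, 1.0), (100.0, 0.05)]  # (amperes, seconds)

def opening_time_s(current_a):
    for (i1, t1), (i2, t2) in zip(CURVE, CURVE[1:]):
        if i1 <= current_a <= i2:
            frac = math.log(current_a / i1) / math.log(i2 / i1)
            return t1 * (t2 / t1) ** frac
    raise ValueError("current outside tabulated curve")

print(f"{opening_time_s(30.0):.1f} s")  # ~10.7 s for this invented curve
```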
Fuse selection depends on the load's characteristics. Semiconductor devices may use a fast or ultrafast fuse as semiconductor devices heat rapidly when excess current flows. The fastest blowing fuses are designed for the most sensitive electrical equipment, where even a short exposure to an overload current could be damaging. Normal fast-blow fuses are the most general purpose fuses. A time-delay fuse (also known as an anti-surge or slow-blow fuse) is designed to allow a current which is above the rated value of the fuse to flow for a short period of time without the fuse blowing. These types of fuse are used on equipment such as motors, which can draw larger than normal currents for up to several seconds while coming up to speed.
The I2t value
The I2t rating is related to the amount of energy let through by the fuse element when it clears the electrical fault. This term is normally used in short-circuit conditions, and the values are used to perform coordination studies in electrical networks. I2t parameters are provided in charts in manufacturer data sheets for each fuse family. For coordination of fuse operation with upstream or downstream devices, both melting I2t and clearing I2t are specified. The melting I2t is proportional to the amount of energy required to begin melting the fuse element. The clearing I2t is proportional to the total energy let through by the fuse when clearing a fault. The energy is mainly dependent on current and time, as well as on the available fault level and system voltage. Since the I2t rating of the fuse is proportional to the energy it lets through, it is a measure of the thermal damage from the heat and magnetic forces that will be produced by a fault.
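As a sketch of how a clearing I2t rating is used: at fault currents high enough that the fuse clears quickly, the let-through i²t is roughly constant, so an approximate clearing time follows directly. The rating below is an invented example value, not a real device rating.

```python
# Sketch: approximate clearing time from a constant clearing I2t rating.
# 1200 A^2*s is an invented example value.

I2T_CLEARING = 1200.0  # A^2*s

def clearing_time_s(fault_current_a):
    return I2T_CLEARING / fault_current_a ** 2

for i in (100.0, 1000.0):
    print(f"{i:>6.0f} A -> ~{clearing_time_s(i) * 1000:.2f} ms")
```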
Breaking capacity
The breaking capacity is the maximum current that can safely be interrupted by the fuse. This should be higher than the prospective short-circuit current. Miniature fuses may have an interrupting rating only 10 times their rated current. Fuses for small, low-voltage, usually residential, wiring systems are commonly rated, in North American practice, to interrupt 10,000 amperes. Fuses for commercial or industrial power systems must have higher interrupting ratings, with some low-voltage current-limiting high interrupting fuses rated for 300,000 amperes. Fuses for high-voltage equipment, up to 115,000 volts, are rated by the total apparent power (megavolt-amperes, MVA) of the fault level on the circuit.
Some fuses are designated high rupture capacity (HRC) or high breaking capacity (HBC) and are usually filled with sand or a similar material.
Low-voltage high rupture capacity (HRC) fuses are used in the area of main distribution boards in low-voltage networks where there is a high prospective short circuit current. They are generally larger than screw-type fuses, and have ferrule cap or blade contacts. High rupture capacity fuses may be rated to interrupt current of 120 kA.
HRC fuses are widely used in industrial installations and are also used in the public power grid, e.g. in transformer stations, main distribution boards, or in building junction boxes and as meter fuses.
In some countries, because of the high fault current available where these fuses are used, local regulations may permit only trained personnel to change these fuses. Some varieties of HRC fuse include special handling features.
Rated voltage
The voltage rating of the fuse must be equal to or greater than the open-circuit voltage that would appear across the fuse after it interrupts the circuit. For example, a glass tube fuse rated at 32 volts would not reliably interrupt current from a voltage source of 120 or 230 V. If a 32 V fuse attempts to interrupt a 120 or 230 V source, an arc may result. Plasma inside the glass tube may continue to conduct current until the current diminishes to the point where the plasma becomes a non-conducting gas. The rated voltage should be higher than the maximum voltage source the fuse would have to disconnect. Connecting fuses in series does not increase the rated voltage of the combination, nor of any one fuse.
Medium-voltage fuses rated for a few thousand volts are never used on low voltage circuits, because of their cost and because they cannot properly clear the circuit when operating at very low voltages.
Voltage drop
The manufacturer may specify the voltage drop across the fuse at rated current. There is a direct relationship between a fuse's cold resistance and its voltage drop value. Once current is applied, the resistance and voltage drop of a fuse rise with its operating temperature until the fuse reaches thermal equilibrium. The voltage drop should be taken into account, particularly when using a fuse in low-voltage applications. Voltage drop often is not significant in more traditional wire-type fuses, but can be significant in other technologies such as resettable (PPTC) type fuses.
Temperature derating
Ambient temperature will change a fuse's operational parameters. A fuse rated for 1 A at 25 °C may conduct up to 10% or 20% more current at −40 °C and may open at 80% of its rated value at 100 °C. Operating values will vary with each fuse family and are provided in manufacturer data sheets.
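A minimal sketch of applying such a derating curve, assuming as illustrative data-sheet points the example figures above (10% extra capability at −40 °C and 80% of rating at 100 °C); any real fuse family has its own published curve.

```python
# Piecewise-linear derating sketch, assuming illustrative data-sheet points
# taken from the example above (a real fuse family publishes its own curve):
derating_points = [(-40.0, 1.10), (25.0, 1.00), (100.0, 0.80)]  # (deg C, factor)

def derated_rating(nominal_amps, ambient_c):
    """Effective current rating after ambient-temperature derating."""
    pts = sorted(derating_points)
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        if t0 <= ambient_c <= t1:
            factor = f0 + (f1 - f0) * (ambient_c - t0) / (t1 - t0)
            return nominal_amps * factor
    raise ValueError("ambient temperature outside tabulated range")

print(derated_rating(1.0, 70.0))  # effective rating of a 1 A fuse at 70 deg C
```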
Markings
Most fuses are marked on the body or end caps with markings that indicate their ratings. Surface-mount technology "chip type" fuses feature few or no markings, making identification very difficult.
Similar appearing fuses may have significantly different properties, identified by their markings. Fuse markings will generally convey the following information, either explicitly as text, or else implicit with the approval agency marking for a particular type:
Current rating of the fuse.
Voltage rating of the fuse.
Time-current characteristic; i.e. fuse speed.
Approvals by national and international standards agencies.
Manufacturer/part number/series.
Interrupting rating (breaking capacity)
Packages and materials
Fuses come in a vast array of sizes and styles to serve in many applications, manufactured in standardised package layouts to make them easily interchangeable. Fuse bodies may be made of ceramic, glass, plastic, fiberglass, molded mica laminates, or molded compressed fibre depending on application and voltage class.
Cartridge (ferrule) fuses have a cylindrical body terminated with metal end caps. Some cartridge fuses are manufactured with end caps of different sizes to prevent accidental insertion of the wrong fuse rating in a holder, giving them a bottle shape.
Fuses for low voltage power circuits may have bolted blade or tag terminals which are secured by screws to a fuseholder. Some blade-type terminals are held by spring clips. Blade type fuses often require the use of a special purpose extractor tool to remove them from the fuse holder.
Renewable fuses have replaceable fuse elements, allowing the fuse body and terminals to be reused if not damaged after a fuse operation.
Fuses designed for soldering to a printed circuit board have radial or axial wire leads. Surface mount fuses have solder pads instead of leads.
High-voltage fuses of the expulsion type have fiber or glass-reinforced plastic tubes and an open end, and can have the fuse element replaced.
Semi-enclosed fuses are fuse wire carriers in which the fusible wire itself can be replaced. The exact fusing current is not as well controlled as in an enclosed fuse, and it is extremely important to use the correct diameter and material when replacing the fuse wire; for these reasons these fuses are slowly falling from favour.
These are still used in consumer units in some parts of the world, but are becoming less common.
While glass fuses have the advantage of a fuse element visible for inspection purposes, they have a low breaking capacity (interrupting rating), which generally restricts them to applications of 15 A or less at 250 VAC. Ceramic fuses have the advantage of a higher breaking capacity, facilitating their use in circuits with higher current and voltage. Filling a fuse body with sand provides additional cooling of the arc and increases the breaking capacity of the fuse. Medium-voltage fuses may have liquid-filled envelopes to assist in the extinguishing of the arc. Some types of distribution switchgear use fuse links immersed in the oil that fills the equipment.
Fuse packages may include a rejection feature such as a pin, slot, or tab, which prevents interchange of otherwise similar appearing fuses. For example, fuse holders for North American class RK fuses have a pin that prevents installation of similar-appearing class H fuses, which have a much lower breaking capacity and a solid blade terminal that lacks the slot of the RK type.
Dimensions
Fuses can be built with different sized enclosures to prevent interchange of different ratings of fuse. For example, bottle style fuses distinguish between ratings with different cap diameters. Automotive glass fuses were made in different lengths, to prevent high-rated fuses being installed in a circuit intended for a lower rating.
Special features
Glass cartridge and plug fuses allow direct inspection of the fusible element. Other fuses have other indication methods including:
Indicating pin or striker pin — extends out of the fuse cap when the element is blown.
Indicating disc — a coloured disc (flush mounted in the end cap of the fuse) falls out when the element is blown.
Element window — a small window built into the fuse body to provide visual indication of a blown element.
External trip indicator — similar function to striker pin, but can be externally attached (using clips) to a compatible fuse.
Some fuses allow a special purpose micro switch or relay unit to be fixed to the fuse body. When the fuse element blows, the indicating pin extends to activate the micro switch or relay, which, in turn, triggers an event.
Some fuses for medium-voltage applications use two or three separate barrels and two or three fuse elements in parallel.
Fuse standards
IEC 60269 fuses
The International Electrotechnical Commission publishes standard 60269 for low-voltage power fuses. The standard is in four volumes, which describe general requirements, fuses for industrial and commercial applications, fuses for residential applications, and fuses to protect semiconductor devices. The IEC standard unifies several national standards, thereby improving the interchangeability of fuses in international trade. All fuses of different technologies tested to meet IEC standards will have similar time-current characteristics, which simplifies design and maintenance.
UL 248 fuses (North America)
In the United States and Canada, low-voltage fuses to 1 kV AC rating are made in accordance with Underwriters Laboratories standard UL 248 or the harmonized Canadian Standards Association standard C22.2 No. 248. This standard applies to fuses rated 1 kV or less, AC or DC, and with breaking capacity up to 200 kA. These fuses are intended for installations following Canadian Electrical Code, Part I (CEC), or the National Electrical Code, NFPA 70 (NEC).
The standard ampere ratings for fuses (and circuit breakers) in the USA and Canada are 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 110, 125, 150, 175, 200, 225, 250, 300, 350, 400, 450, 500, 600, 700, 800, 1000, 1200, 1600, 2000, 2500, 3000, 4000, 5000, and 6000 amperes. Additional standard ampere ratings for fuses are 1, 3, 6, 10, and 601.
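A sketch of picking the next standard rating at or above a calculated current, using the ratings listed above. This ignores installation-code rules such as continuous-load factors, which govern real selections.

```python
# Standard UL 248 ampere ratings, merged and sorted from the text above.
STANDARD_RATINGS = [1, 3, 6, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80,
                    90, 100, 110, 125, 150, 175, 200, 225, 250, 300, 350, 400,
                    450, 500, 600, 601, 700, 800, 1000, 1200, 1600, 2000,
                    2500, 3000, 4000, 5000, 6000]

def next_standard_rating(load_amps):
    """Smallest standard rating that is at least the given current."""
    for rating in STANDARD_RATINGS:
        if rating >= load_amps:
            return rating
    raise ValueError("load exceeds largest standard rating")

print(next_standard_rating(87.5))  # -> 90
```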
UL 248 currently has 19 "parts". UL 248-1 sets the general requirements for fuses, while the later parts are dedicated to specific fuse sizes (e.g., 248-8 for Class J, 248-10 for Class L) or to categories of fuses with unique properties (e.g., 248-13 for semiconductor fuses, 248-19 for photovoltaic fuses). The general requirements (248-1) apply except as modified by the supplemental part (248-x). For example, UL 248-19 allows photovoltaic fuses to be rated up to 1500 volts DC, versus 1000 volts under the general requirements.
IEC and UL nomenclature varies slightly. IEC standards refer to a "fuse" as the assembly of a fusible link and a fuse holder. In North American standards, the fuse is the replaceable portion of the assembly, and a fuse link would be a bare metal element for installation in a fuse.
Automotive fuses
Automotive fuses are used to protect the wiring and electrical equipment for vehicles. There are several different types of automotive fuses and their usage is dependent upon the specific application, voltage, and current demands of the electrical circuit. Automotive fuses can be mounted in fuse blocks, inline fuse holders, or fuse clips. Some automotive fuses are occasionally used in non-automotive electrical applications. Standards for automotive fuses are published by SAE International (formerly known as the Society of Automotive Engineers).
Automotive fuses can be classified into four distinct categories:
Blade fuses
Glass tube or Bosch type
Fusible links
Fuse limiters
Most automotive fuses rated at 32 volts are used on circuits rated 24 volts DC and below. Some vehicles use a dual 12/42 V DC electrical system that will require a fuse rated at 58 V DC.
High voltage fuses
Fuses are used on power systems up to 115,000 volts AC. High-voltage fuses are used to protect instrument transformers used for electricity metering, or for small power transformers where the expense of a circuit breaker is not warranted. A circuit breaker at 115 kV may cost up to five times as much as a set of power fuses, so the resulting saving can be tens of thousands of dollars.
In medium-voltage distribution systems, a power fuse may be used to protect a transformer serving 1–3 houses. Pole-mounted distribution transformers are nearly always protected by a fusible cutout, which can have the fuse element replaced using live-line maintenance tools.
Medium-voltage fuses are also used to protect motors, capacitor banks and transformers and may be mounted in metal enclosed switchgear, or (rarely in new designs) on open switchboards.
Expulsion fuses
Large power fuses use fusible elements made of silver, copper or tin to provide stable and predictable performance. High voltage expulsion fuses surround the fusible link with gas-evolving substances, such as boric acid. When the fuse blows, heat from the arc causes the boric acid to evolve large volumes of gases. The associated high pressure (often greater than 100 atmospheres) and cooling gases rapidly quench the resulting arc. The hot gases are then explosively expelled out of the end(s) of the fuse. Such fuses can only be used outdoors.
This type of fuse may have an impact pin to operate a switch mechanism, so that all three phases are interrupted if any one fuse blows.
The term "high-power fuse" means that these fuses can interrupt several kiloamperes; some manufacturers have tested their fuses for short-circuit currents of up to 63 kA.
Comparison with circuit breakers
Fuses have the advantages of often being less costly and simpler than a circuit breaker for similar ratings. A blown fuse must be replaced with a new device, which is less convenient than simply resetting a breaker; this inconvenience, however, makes people less likely to ignore faults. On the other hand, replacing a fuse without isolating the circuit first (most building wiring designs do not provide individual isolation switches for each fuse) can be dangerous in itself, particularly if the fault is a short circuit.
In terms of protection response time, fuses tend to isolate faults more quickly (depending on their operating time) than circuit breakers. A fuse can clear a fault within a quarter cycle of the fault current, while a circuit breaker may take around half to one cycle to clear the fault. The response time of a fuse can be as fast as 0.002 seconds, whereas a circuit breaker typically responds in the range of 0.02 to 0.05 seconds.
High rupturing capacity fuses can be rated to safely interrupt up to 300,000 amperes at 600 V AC. Special current-limiting fuses are applied ahead of some molded-case breakers to protect the breakers in low-voltage power circuits with high short-circuit levels.
Current-limiting fuses operate so quickly that they limit the total "let-through" energy that passes into the circuit, helping to protect downstream equipment from damage. These fuses open in less than one cycle of the AC power frequency; circuit breakers cannot match this speed.
Some types of circuit breakers must be maintained on a regular basis to ensure their mechanical operation during an interruption. This is not the case with fuses, which rely on melting processes where no mechanical operation is required for the fuse to operate under fault conditions.
In a multi-phase power circuit, if only one fuse opens, the remaining phases will have higher than normal currents, and unbalanced voltages, with possible damage to motors. Fuses only sense overcurrent, or to a degree, over-temperature, and cannot usually be used independently with protective relaying to provide more advanced protective functions, for example, ground fault detection.
Some manufacturers of medium-voltage distribution fuses combine the overcurrent protection characteristics of the fusible element with the flexibility of relay protection by adding a pyrotechnic device to the fuse operated by external protective relays.
For domestic applications, miniature circuit breakers (MCBs) are widely used as an alternative to fuses. Their rated current depends on the load current of the equipment to be protected and the ambient operating temperature. They are available in the following ratings: 6 A, 10 A, 16 A, 20 A, 25 A, 32 A, 45 A, 50 A, 63 A, 80 A, 100 A, and 125 A.
Fuse boxes
United Kingdom
In the UK, older electrical consumer units (also called fuse boxes) are fitted either with semi-enclosed (rewirable) fuses or cartridge fuses (Fuse wire is commonly supplied to consumers as short lengths of 5 A-, 15 A- and 30 A-rated wire wound on a piece of cardboard.) Modern consumer units usually contain miniature circuit breakers (MCBs) instead of fuses, though cartridge fuses are sometimes still used, as in some applications MCBs are prone to nuisance tripping.
Renewable fuses (rewirable or cartridge) allow user replacement, but this can be hazardous: it is easy to put a higher-rated or doubled fuse element (link or wire) into the holder (overfusing), or simply to fit the existing carrier with copper wire or even a totally different type of conducting object (coins, hairpins, paper clips, nails, etc.). One form of fuse box abuse was to put a penny in the socket, which defeated overcurrent protection and resulted in a dangerous condition. Such tampering will not be visible without full inspection of the fuse. Fuse wire was never used in North America for this reason, although renewable fuses continue to be made for distribution boards.
The Wylex standard consumer unit was very popular in the United Kingdom until the wiring regulations started demanding residual-current devices (RCDs) for sockets that could feasibly supply equipment outside the equipotential zone. The design does not allow for fitting of RCDs or RCBOs. Some Wylex standard models were made with an RCD instead of the main switch, but (for consumer units supplying the entire installation) this is no longer compliant with the wiring regulations, as alarm systems should not be RCD-protected. There are two styles of fuse base that can be screwed into these units: one designed for rewirable fusewire carriers and one designed for cartridge fuse carriers. Over the years MCBs have been made for both styles of base. In both cases, higher rated carriers had wider pins, so a carrier could not be changed for a higher rated one without also changing the base. Cartridge fuse carriers are also now available for DIN-rail enclosures.
North America
In North America, fuses were used in buildings wired before 1960. These Edison-base fuses would screw into a fuse socket similar to Edison-base incandescent lamps. Ratings were 5, 10, 15, 20, 25, and 30 amperes. To prevent installation of fuses with an excessive current rating, later fuse boxes included rejection features in the fuse-holder socket, commonly known as Rejection Base (Type S) fuses, which have smaller diameters that vary depending on the rating of the fuse. This means that a fuse can only be replaced by one of the preset (Type S) rating. This is a North American, tri-national standard (UL 4248-11; CAN/CSA-C22.2 NO. 4248.11-07 (R2012); and NMX-J-009/4248/11-ANCE). Existing Edison fuse boards can easily be converted to accept only Rejection Base (Type S) fuses by screwing in a tamper-proof adapter. This adapter screws into the existing Edison fuse holder and has a smaller-diameter threaded hole to accept the designated Type S rated fuse.
Some companies manufacture resettable miniature thermal circuit breakers that screw into a fuse socket, and some installations use these Edison-base circuit breakers. However, any such breaker sold today has one flaw: if it is installed in a circuit-breaker box with a door, the closed door may hold down the breaker's reset button. While in this state, the breaker is effectively useless: it does not provide any overcurrent protection.
In the 1950s, fuses in new residential and industrial construction were superseded by low-voltage circuit breakers for branch-circuit protection.
Fuses are widely used for protection of electric motor circuits; for small overloads, the motor protection circuit will open the controlling contactor automatically, and the fuse will only operate for short circuits or extreme overload.
Coordination of fuses in series
Where several fuses are connected in series at the various levels of a power distribution system, it is desirable to blow (clear) only the fuse (or other overcurrent device) electrically closest to the fault. This process is called "coordination" and may require the time-current characteristics of two fuses to be plotted on a common current basis. Fuses are selected so that the minor branch fuse disconnects its circuit well before the supplying feeder fuse starts to melt. In this way, only the faulty circuit is interrupted, with minimal disturbance to other circuits fed by a common supplying fuse.
Where the fuses in a system are of similar types, simple rule-of-thumb ratios between ratings of the fuse closest to the load and the next fuse towards the source can be used.
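A sketch of that rule of thumb. The 1.6:1 minimum ratio used here is a commonly quoted figure for similar low-voltage fuse types, but treat it as an assumption: the applicable ratio depends on fuse class and manufacturer, and a full coordination study compares the actual time-current curves.

```python
# Rule-of-thumb ratio check for series fuse coordination.
MIN_RATIO = 1.6  # assumed minimum upstream:downstream rating ratio

def coordinated(upstream_amps, downstream_amps, min_ratio=MIN_RATIO):
    """True if the simple ratio rule suggests the pair will coordinate."""
    return upstream_amps / downstream_amps >= min_ratio

print(coordinated(100, 60))  # True: 100/60 is about 1.67
print(coordinated(80, 60))   # False: 80/60 is about 1.33
```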
Other circuit protectors
Resettable fuses
So-called self-resetting fuses use a thermoplastic conductive element known as a polymeric positive temperature coefficient (PPTC) thermistor that impedes the circuit during an overcurrent condition (by increasing device resistance). The PPTC thermistor is self-resetting in that when current is removed, the device will cool and revert to low resistance. These devices are often used in aerospace/nuclear applications where replacement is difficult, or on a computer motherboard so that a shorted mouse or keyboard does not cause motherboard damage.
Thermal fuses
A thermal fuse is often found in consumer equipment such as coffee makers, hair dryers or transformers powering small consumer electronics devices. It contains a fusible, temperature-sensitive composition which holds a spring contact mechanism normally closed. When the surrounding temperature gets too high, the composition melts and allows the spring contact mechanism to break the circuit. The device can be used to prevent a fire in a hair dryer, for example, by cutting off the power supply to the heater elements when the air flow is interrupted (e.g., the blower motor stops or the air intake becomes accidentally blocked). A thermal fuse is a 'one-shot', non-resettable device which must be replaced once it has been activated (blown).
Cable limiter
A cable limiter is similar to a fuse but is intended only for protection of low voltage power cables. It is used, for example, in networks where multiple cables may be used in parallel. It is not intended to provide overload protection, but instead protects a cable that is exposed to a short circuit. The characteristics of the limiter are matched to the size of cable so that the limiter clears a fault before the cable insulation is damaged.
Unicode symbol
The Unicode character for the fuse's schematic symbol, found in the Miscellaneous Technical block, is (⏛).
| Technology | Components | null |
683368 | https://en.wikipedia.org/wiki/Young%20tableau | Young tableau | In mathematics, a Young tableau (plural: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties.
Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley.
Definitions
Note: this article uses the English convention for displaying Young diagrams and tableaux.
Diagrams
A Young diagram (also called a Ferrers diagram, particularly when represented using dots) is a finite collection of boxes, or cells, arranged in left-justified rows, with the row lengths in non-increasing order. Listing the number of boxes in each row gives a partition λ of a non-negative integer n, the total number of boxes of the diagram. The Young diagram is said to be of shape λ, and it carries the same information as that partition. Containment of one Young diagram in another defines a partial ordering on the set of all partitions, which is in fact a lattice structure, known as Young's lattice. Listing the number of boxes of a Young diagram in each column gives another partition, the conjugate or transpose partition of λ; one obtains a Young diagram of that shape by reflecting the original diagram along its main diagonal.
There is almost universal agreement that in labeling boxes of Young diagrams by pairs of integers, the first index selects the row of the diagram, and the second index selects the box within the row. Nevertheless, two distinct conventions exist to display these diagrams, and consequently tableaux: the first places each row below the previous one, the second stacks each row on top of the previous one. Since the former convention is mainly used by Anglophones while the latter is often preferred by Francophones, it is customary to refer to these conventions respectively as the English notation and the French notation; for instance, in his book on symmetric functions, Macdonald advises readers preferring the French convention to "read this book upside down in a mirror" (Macdonald 1979, p. 2). This nomenclature probably started out as jocular. The English notation corresponds to the one universally used for matrices, while the French notation is closer to the convention of Cartesian coordinates; however, French notation differs from that convention by placing the vertical coordinate first. The figure on the right shows, using the English notation, the Young diagram corresponding to the partition (5, 4, 1) of the number 10. The conjugate partition, measuring the column lengths, is (3, 2, 2, 2, 1).
Arm and leg length
In many applications, for example when defining Jack functions, it is convenient to define the arm length aλ(s) of a box s as the number of boxes to the right of s in the diagram λ in English notation. Similarly, the leg length lλ(s) is the number of boxes below s. The hook length of a box s is the number of boxes to the right of s or below s in English notation, including the box s itself; in other words, the hook length is aλ(s) + lλ(s) + 1.
Tableaux
A Young tableau is obtained by filling in the boxes of the Young diagram with symbols taken from some alphabet, which is usually required to be a totally ordered set. Originally that alphabet was a set of indexed variables x1, x2, x3, ..., but now one usually uses a set of numbers for brevity. In their original application to representations of the symmetric group, Young tableaux have n distinct entries, arbitrarily assigned to boxes of the diagram. A tableau is called standard if the entries in each row and each column are increasing. The number of distinct standard Young tableaux on n entries is given by the involution numbers
1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... .
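These involution numbers can be checked with the standard recurrence I(n) = I(n−1) + (n−1)·I(n−2), which holds because an involution of {1, ..., n} either fixes n or swaps n with one of the remaining n − 1 elements. A minimal sketch:

```python
def involution_numbers(n_max):
    """I(n): number of standard Young tableaux with n boxes, over all shapes,
    which equals the number of involutions of {1, ..., n}."""
    vals = [1, 1]  # I(0), I(1)
    for n in range(2, n_max + 1):
        vals.append(vals[n - 1] + (n - 1) * vals[n - 2])
    return vals

print(involution_numbers(10))
# [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496]  (matches the list above)
```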
In other applications, it is natural to allow the same number to appear more than once (or not at all) in a tableau. A tableau is called semistandard, or column strict, if the entries weakly increase along each row and strictly increase down each column. Recording the number of times each number appears in a tableau gives a sequence known as the weight of the tableau. Thus the standard Young tableaux are precisely the semistandard tableaux of weight (1,1,...,1), which requires every integer up to n to occur exactly once.
In a standard Young tableau, the integer k is a descent if k + 1 appears in a row strictly below k. The sum of the descents is called the major index of the tableau.
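A small sketch computing descents and the major index for a standard tableau stored as a list of rows in English notation (the helper names are illustrative):

```python
def descents(tableau):
    """Integers k such that k + 1 sits in a strictly lower row than k."""
    row_of = {entry: r for r, row in enumerate(tableau) for entry in row}
    n = len(row_of)
    return [k for k in range(1, n) if row_of[k + 1] > row_of[k]]

def major_index(tableau):
    """Sum of the descents."""
    return sum(descents(tableau))

t = [[1, 2, 4], [3, 5]]      # a standard tableau of shape (3, 2)
print(descents(t))           # [2, 4]: 3 lies below 2, and 5 lies below 4
print(major_index(t))        # 6
```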
Variations
There are several variations of this definition: for example, in a row-strict tableau the entries strictly increase along the rows and weakly increase down the columns. Also, tableaux with decreasing entries have been considered, notably, in the theory of plane partitions. There are also generalizations such as domino tableaux or ribbon tableaux, in which several boxes may be grouped together before assigning entries to them.
Skew tableaux
A skew shape is a pair of partitions (λ, μ) such that the Young diagram of λ contains the Young diagram of μ; it is denoted by λ/μ. If λ = (λ1, λ2, ...) and μ = (μ1, μ2, ...), then the containment of diagrams means that μi ≤ λi for all i. The skew diagram of a skew shape λ/μ is the set-theoretic difference of the Young diagrams of λ and μ: the set of squares that belong to the diagram of λ but not to that of μ. A skew tableau of shape λ/μ is obtained by filling the squares of the corresponding skew diagram; such a tableau is semistandard if entries increase weakly along each row, and increase strictly down each column, and it is standard if moreover all numbers from 1 to the number of squares of the skew diagram occur exactly once. While the map from partitions to their Young diagrams is injective, this is not the case for the map from skew shapes to skew diagrams; therefore the shape of a skew diagram cannot always be determined from the set of filled squares only. Although many properties of skew tableaux only depend on the filled squares, some operations defined on them do require explicit knowledge of λ and μ, so it is important that skew tableaux do record this information: two distinct skew tableaux may differ only in their shape, while they occupy the same set of squares, each filled with the same entries. Young tableaux can be identified with skew tableaux in which μ is the empty partition (0) (the unique partition of 0).
Any skew semistandard tableau T of shape λ/μ with positive integer entries gives rise to a sequence of partitions (or Young diagrams), by starting with μ, and taking for the partition i places further in the sequence the one whose diagram is obtained from that of μ by adding all the boxes that contain a value ≤ i in T; this partition eventually becomes equal to λ. Any pair of successive shapes in such a sequence is a skew shape whose diagram contains at most one box in each column; such shapes are called horizontal strips. This sequence of partitions completely determines T, and it is in fact possible to define (skew) semistandard tableaux as such sequences, as is done by Macdonald (Macdonald 1979, p. 4). This definition incorporates the partitions λ and μ in the data comprising the skew tableau.
Overview of applications
Young tableaux have numerous applications in combinatorics, representation theory, and algebraic geometry. Various ways of counting Young tableaux have been explored and lead to the definition of and identities for Schur functions.
Many combinatorial algorithms on tableaux are known, including Schützenberger's jeu de taquin and the Robinson–Schensted–Knuth correspondence. Lascoux and Schützenberger studied an associative product on the set of all semistandard Young tableaux, giving it the structure called the plactic monoid (French: le monoïde plaxique).
In representation theory, standard Young tableaux of size n describe bases in irreducible representations of the symmetric group on n letters. The standard monomial basis in a finite-dimensional irreducible representation of the general linear group GL(n) is parametrized by the set of semistandard Young tableaux of a fixed shape over the alphabet {1, 2, ..., n}. This has important consequences for invariant theory, starting from the work of Hodge on the homogeneous coordinate ring of the Grassmannian and further explored by Gian-Carlo Rota with collaborators, de Concini and Procesi, and Eisenbud. The Littlewood–Richardson rule describing (among other things) the decomposition of tensor products of irreducible representations of GL(n) into irreducible components is formulated in terms of certain skew semistandard tableaux.
Applications to algebraic geometry center around Schubert calculus on Grassmannians and flag varieties. Certain important cohomology classes can be represented by Schubert polynomials and described in terms of Young tableaux.
Applications in representation theory
Young diagrams are in one-to-one correspondence with irreducible representations of the symmetric group over the complex numbers. They provide a convenient way of specifying the Young symmetrizers from which the irreducible representations are built. Many facts about a representation can be deduced from the corresponding diagram. Below, we describe two examples: determining the dimension of a representation and restricted representations. In both cases, we will see that some properties of a representation can be determined by using just its diagram. Young tableaux are involved in the use of the symmetric group in quantum chemistry studies of atoms, molecules and solids.
Young diagrams also parametrize the irreducible polynomial representations of the general linear group GL(n) (when they have at most n nonempty rows), the irreducible representations of the special linear group SL(n) (when they have at most n − 1 nonempty rows), and the irreducible complex representations of the special unitary group SU(n) (again when they have at most n − 1 nonempty rows). In these cases semistandard tableaux with entries up to n play a central role, rather than standard tableaux; in particular it is the number of those tableaux that determines the dimension of the representation.
Dimension of a representation
The dimension of the irreducible representation of the symmetric group corresponding to a partition λ of n is equal to the number of different standard Young tableaux that can be obtained from the diagram of the representation. This number can be calculated by the hook length formula.
The hook length of a box x in a Young diagram Y(λ) of shape λ is the number of boxes that are in the same row to the right of it plus those boxes in the same column below it, plus one (for the box itself). By the hook-length formula, the dimension of the irreducible representation is n! divided by the product of the hook lengths of all boxes in the diagram of the representation:

$$\dim \pi_{\lambda} = \frac{n!}{\prod_{x \in Y(\lambda)} \operatorname{hook}(x)}.$$
The figure on the right shows hook lengths for all boxes in the diagram of the partition 10 = 5 + 4 + 1. Thus

$$\dim \pi_{\lambda} = \frac{10!}{7 \cdot 5 \cdot 4 \cdot 3 \cdot 1 \cdot 5 \cdot 3 \cdot 2 \cdot 1 \cdot 1} = 288.$$
Similarly, the dimension of the irreducible representation W(λ) of GL(r) corresponding to the partition λ of n (with at most r parts) is the number of semistandard Young tableaux of shape λ (containing only the entries from 1 to r), which is given by the hook-length formula:

$$\dim W(\lambda) = \prod_{(i,j) \in Y(\lambda)} \frac{r + j - i}{\operatorname{hook}(i,j)},$$
where the index i gives the row and j the column of a box. For instance, for the partition (5,4,1) we get as dimension of the corresponding irreducible representation of GL(7) (traversing the boxes by rows):

$$\dim W(\lambda) = \frac{7 \cdot 8 \cdot 9 \cdot 10 \cdot 11}{7 \cdot 5 \cdot 4 \cdot 3 \cdot 1} \cdot \frac{6 \cdot 7 \cdot 8 \cdot 9}{5 \cdot 3 \cdot 2 \cdot 1} \cdot \frac{5}{1} = 66528.$$
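Both dimension counts can be checked directly from the hook lengths. The sketch below computes hook lengths from a partition and evaluates the two formulas above; the function names are illustrative.

```python
from math import factorial

def hook_lengths(shape):
    """Hook length of each box of a Young diagram, given as a partition."""
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]  # conjugate
    return [[(shape[i] - j) + (conj[j] - i) - 1 for j in range(shape[i])]
            for i in range(len(shape))]

def dim_symmetric_group(shape):
    """Dimension of the S_n irreducible: n! / product of hook lengths."""
    prod = 1
    for row in hook_lengths(shape):
        for h in row:
            prod *= h
    return factorial(sum(shape)) // prod

def dim_general_linear(shape, r):
    """Number of semistandard tableaux with entries in 1..r
    (dimension of the GL(r) irreducible), by the hook content formula."""
    num, den = 1, 1
    for i, row in enumerate(hook_lengths(shape)):
        for j, h in enumerate(row):
            num *= r + j - i
            den *= h
    return num // den

print(dim_symmetric_group((5, 4, 1)))   # 288
print(dim_general_linear((5, 4, 1), 7)) # 66528
```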
Restricted representations
A representation of the symmetric group on n elements, Sn, is also a representation of the symmetric group on n − 1 elements, Sn−1. However, an irreducible representation of Sn may not be irreducible for Sn−1. Instead, it may be a direct sum of several representations that are irreducible for Sn−1. These representations are then called the factors of the restricted representation (see also induced representation).
The question of determining this decomposition of the restricted representation of a given irreducible representation of Sn, corresponding to a partition λ of n, is answered as follows. One forms the set of all Young diagrams that can be obtained from the diagram of shape λ by removing just one box (which must be at the end both of its row and of its column); the restricted representation then decomposes as a direct sum of the irreducible representations of Sn−1 corresponding to those diagrams, each occurring exactly once in the sum.
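A short sketch of this branching rule, enumerating the diagrams obtained by removing a corner box (a box at the end of both its row and its column):

```python
def removable_corners(shape):
    """Partitions obtained by removing one box that ends its row and column."""
    out = []
    for i, length in enumerate(shape):
        last_row = (i == len(shape) - 1)
        if last_row or shape[i + 1] < length:   # box (i, length-1) is a corner
            smaller = list(shape)
            smaller[i] -= 1
            if smaller[i] == 0:
                smaller.pop(i)
            out.append(tuple(smaller))
    return out

print(removable_corners((5, 4, 1)))  # [(4, 4, 1), (5, 3, 1), (5, 4)]
```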
| Mathematics | Sums and products | null |
683942 | https://en.wikipedia.org/wiki/Horsehead%20Nebula | Horsehead Nebula | The Horsehead Nebula (also known as Barnard 33 or B33) is a small dark nebula in the constellation Orion. The nebula is located just to the south of Alnitak, the easternmost star of Orion's Belt, and is part of the much larger Orion molecular cloud complex. It appears within the southern region of the dense dust cloud known as Lynds 1630, along the edge of the much larger, active star-forming H II region called IC 434.
The Horsehead Nebula is approximately 422 parsecs or 1,375 light-years from Earth. It is one of the most identifiable nebulae because of its resemblance to a horse's head.
Using NASA's James Webb Space Telescope, astronomers have captured the nebula's "mane" in unprecedented detail, revealing the complexity of the photodissociation region where ultraviolet light interacts with gas and dust.
History
The nebula was discovered by Scottish astronomer Williamina Fleming in 1888 on a photographic plate taken at the Harvard College Observatory. One of the first descriptions was made by E. E. Barnard, describing it as: "Dark mass, diam. 4′, on nebulous strip extending south from ζ Orionis", cataloguing the dark nebula as Barnard 33.
Structure
The dark cloud of dust and gas is a region in the Orion molecular cloud complex, where star formation is taking place. It is located in the constellation of Orion, which is prominent in the winter evening sky in the Northern Hemisphere and the summer evening sky in the Southern Hemisphere.
Colour images reveal a red colour that originates from ionised hydrogen gas (Hα) predominantly behind the nebula, ionised by the nearby bright star Sigma Orionis. Magnetic fields channel the gases leaving the nebula into streams, which show as foreground streaks against the background glow. A glowing strip of hydrogen gas marks the edge of the enormous cloud, and the densities of nearby stars are noticeably different on either side.
Heavy concentrations of dust in the Horsehead Nebula region and neighbouring Orion Nebula are localized into interstellar clouds, resulting in alternating sections of nearly complete opacity and transparency. The darkness of the Horsehead is caused mostly by thick dust blocking the light of stars behind it. The lower part of the Horsehead's neck casts a shadow to the left. The visible dark nebula emerging from the gaseous complex is an active site of the formation of "low-mass" stars. Bright spots in the Horsehead Nebula's base are young stars just in the process of forming.
| Physical sciences | Notable nebulae | null |
684489 | https://en.wikipedia.org/wiki/Toughness | Toughness | In materials science and metallurgy, toughness is the ability of a material to absorb energy and plastically deform without fracturing. Toughness is the strength with which the material opposes rupture. One definition of material toughness is the amount of energy per unit volume that a material can absorb before rupturing. This measure of toughness is different from that used for fracture toughness, which describes the capacity of materials to resist fracture.
Toughness requires a balance of strength and ductility.
Toughness and strength
Toughness is related to the area under the stress–strain curve. In order to be tough, a material must be both strong and ductile. For example, brittle materials (like ceramics) that are strong but with limited ductility are not tough; conversely, very ductile materials with low strengths are also not tough. To be tough, a material should withstand both high stresses and high strains. Generally speaking, strength indicates how much force the material can support, while toughness indicates how much energy a material can absorb before rupturing.
Mathematical definition
Toughness can be determined by integrating the stress–strain curve. It is the energy of mechanical deformation per unit volume prior to fracture. The explicit mathematical description is:

$$\frac{\text{energy}}{\text{volume}} = \int_{0}^{\epsilon_f} \sigma \, d\epsilon,$$

where
ε is strain,
ε_f is the strain upon failure, and
σ is stress.
If the upper limit of integration is restricted to the yield point, the energy absorbed per unit volume is known as the modulus of resilience. Mathematically, the modulus of resilience can be expressed as the square of the yield stress divided by two times the Young's modulus of elasticity. That is,

$$U_r = \frac{\sigma_y^2}{2E}.$$
Toughness tests
The toughness of a material can be measured using a small specimen of that material. A typical testing machine uses a pendulum to deform a notched specimen of defined cross-section. The height from which the pendulum fell, minus the height to which it rose after deforming the specimen, multiplied by the weight of the pendulum, is a measure of the energy absorbed by the specimen as it was deformed during the impact with the pendulum. The Charpy and Izod notched impact strength tests are typical ASTM tests used to determine toughness.
Unit of toughness
Tensile toughness (or deformation energy, UT) is measured in units of joule per cubic metre (J·m−3), or equivalently newton-metre per cubic metre (N·m·m−3), in the SI system and inch-pound-force per cubic inch (in·lbf·in−3) in US customary units:
1.00 N·m·m−3 ≃ 1.45 × 10−4 in·lbf·in−3
1.00 in·lbf·in−3 ≃ 6.89 kN·m·m−3.
In the SI system, the unit of tensile toughness can be easily calculated by using area underneath the stress–strain (σ–ε) curve, which gives tensile toughness value, as given below:
UT = Area underneath the stress–strain (σ–ε) curve = σ × ε
UT [=] F/A × ΔL/L = (N·m−2)·(unitless)
UT [=] N·m·m−3
UT [=] J·m−3
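A minimal sketch of evaluating tensile toughness from measured data: integrate the stress–strain curve numerically with the trapezoidal rule. The sample points below are invented for illustration.

```python
# Invented sample data: strain is dimensionless, stress in Pa (N/m^2).
strain = [0.000, 0.001, 0.002, 0.005, 0.010, 0.050, 0.100]
stress = [0.0, 200e6, 350e6, 420e6, 450e6, 500e6, 480e6]

def toughness(strain_pts, stress_pts):
    """Area under the stress-strain curve in J/m^3 (trapezoidal rule)."""
    area = 0.0
    for k in range(1, len(strain_pts)):
        d_eps = strain_pts[k] - strain_pts[k - 1]
        area += 0.5 * (stress_pts[k] + stress_pts[k - 1]) * d_eps
    return area

print(f"U_T = {toughness(strain, stress):.3e} J/m^3")
```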
Toughest material
An alloy made of almost equal amounts of chromium, cobalt and nickel (CrCoNi) is the toughest material discovered thus far. It resists fracturing even at extremely low temperatures close to absolute zero. It is being considered as a material for building spacecraft.
| Physical sciences | Solid mechanics | Physics |
685179 | https://en.wikipedia.org/wiki/Schwinger%27s%20quantum%20action%20principle | Schwinger's quantum action principle | The Schwinger's quantum action principle is a variational approach to quantum mechanics and quantum field theory. This theory was introduced by Julian Schwinger in a series of articles starting 1950.
Approach
In Schwinger's approach, the action principle is targeted towards quantum mechanics. The action becomes a quantum action, i.e. an operator, . Although it is superficially different from the path integral formulation where the action is a classical function, the modern formulation of the two formalisms are identical.
Suppose we have two states defined by the values of a complete set of commuting operators at two times. Let the early and late states be |A⟩ and |B⟩, respectively. Suppose that there is a parameter in the Lagrangian which can be varied, usually a source for a field. The main equation of Schwinger's quantum action principle is:

$$\delta \langle B | A \rangle = \frac{i}{\hbar} \langle B | \, \delta S \, | A \rangle,$$

where the variation is with respect to small changes (δ) in the parameter, and $S = \int \mathcal{L} \, dt$ is the quantum action, with $\mathcal{L}$ the Lagrange operator.
In the path integral formulation, the transition amplitude is represented by the sum over all histories of $e^{iS/\hbar}$, with appropriate boundary conditions representing the states |A⟩ and |B⟩. The infinitesimal change in the amplitude is clearly given by Schwinger's formula. Conversely, starting from Schwinger's formula, it is easy to show that the fields obey canonical commutation relations and the classical equations of motion, and so have a path integral representation. Schwinger's formulation was most significant because it could treat fermionic anticommuting fields with the same formalism as bose fields, thus implicitly introducing differentiation and integration with respect to anti-commuting coordinates.
| Physical sciences | Quantum mechanics | Physics |
685311 | https://en.wikipedia.org/wiki/Experimental%20physics | Experimental physics | Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as Galileo's experiments, to more complicated ones, such as the Large Hadron Collider.
Overview
Experimental physics is a branch of physics that is concerned with data acquisition, data-acquisition methods, and the detailed conceptualization (beyond simple thought experiments) and realization of laboratory experiments. It is often contrasted with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with acquiring empirical data.
Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relationship. The former provides data about the universe, which can then be analyzed in order to be understood, while the latter provides explanations for the data and thus offers insight into how to better acquire data and set up experiments. Theoretical physics can also offer insight into what data is needed in order to gain a better understanding of the universe, and into what experiments to devise in order to obtain it.
The tension between experimental and theoretical aspects of physics was expressed by James Clerk Maxwell as "It is not till we attempt to bring the theoretical part of our training into contact with the practical that we begin to experience the full effect of what Faraday has called 'mental inertia' - not only the difficulty of recognizing, among the concrete objects before us, the abstract relation which we have learned from books, but the distracting pain of wrenching the mind away from the symbols to the objects, and from the objects back to the symbols. This however is the price we have to pay for new ideas."
History
As a distinct field, experimental physics was established in early modern Europe, during what is known as the Scientific Revolution, by physicists such as Galileo Galilei, Christiaan Huygens, Johannes Kepler, Blaise Pascal and Sir Isaac Newton. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories, which is the key idea in the modern scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship (as a moving frame) and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum.
Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton (1643–1727). In 1687, Newton published the Principia, detailing two comprehensive and successful physical laws: Newton's laws of motion, from which arise classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. Both laws agreed well with experiment. The Principia also included several theories in fluid dynamics.
From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Robert Boyle, Thomas Young, and many others. In 1733, Daniel Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Benjamin Thompson (Count Rumford) demonstrated the conversion of mechanical work into heat, and in 1847 James Prescott Joule stated the law of conservation of energy, in the form of heat as well as mechanical energy. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom.
It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that magnetic fields and electricity could generate each other. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries. By the 19th century, the sciences had segmented into multiple fields with specialized researchers and the field of physics, although logically pre-eminent, no longer could claim sole ownership of the entire field of scientific research.
Current experiments
Some examples of prominent experimental physics projects are:
Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions (it is the first heavy ion collider) and protons; it is located at Brookhaven National Laboratory, on Long Island, USA.
HERA, which collides electrons or positrons and protons, and is part of DESY, located in Hamburg, Germany.
LHC, or the Large Hadron Collider, which completed construction in 2008 but suffered a series of setbacks. The LHC began operations in 2008 but was shut down for maintenance until the summer of 2009. It is the world's most energetic collider and is located at CERN, on the French-Swiss border near Geneva. The collider became fully operational on March 29, 2010, a year and a half later than originally planned.
LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Currently two LIGO observatories exist: LIGO Livingston Observatory in Livingston, Louisiana, and LIGO Hanford Observatory near Richland, Washington.
JWST, or the James Webb Space Telescope, launched in 2021 as the successor to the Hubble Space Telescope. It surveys the sky in the infrared region. The main goals of the JWST are to understand the initial stages of the universe, galaxy formation, the formation of stars and planets, and the origins of life.
Mississippi State Axion Search (2016 completion), a Light Shining Through a Wall (LSW) experiment; EM source: 0.7 m, 50 W continuous radio wave emitter.
Method
Experimental physics uses two main methods of experimental research: controlled experiments and natural experiments. Controlled experiments are often used in laboratories, as laboratories can offer a controlled environment. Natural experiments are used, for example, in astrophysics when observing celestial objects, where control of the variables in effect is impossible.
Famous experiments
Famous experiments include:
Bell test experiments
Cavendish experiment
Chicago Pile-1
Cowan–Reines neutrino experiment
Davisson–Germer experiment
Delayed-choice quantum eraser
Double-slit experiment
Eddington experiment
Eötvös experiment
Fizeau experiment
Foucault pendulum
Franck–Hertz experiment
Geiger–Marsden experiment
Gravity Probe A and Gravity Probe B
Hafele–Keating experiment
Homestake experiment
Kite experiment
Oil drop experiment
Michelson–Morley experiment
Rømer's determination of the speed of light
Stern–Gerlach experiment
Torricelli's experiment
Wu experiment
Experimental techniques
Some well-known experimental techniques include:
Crystallography
Ellipsometry
Faraday cage
Interferometry
NMR
Laser cooling
Laser spectroscopy
Raman spectroscopy
Signal processing
Spectroscopy
STM
Vacuum technique
X-ray spectroscopy
Inelastic neutron scattering
Prominent experimental physicists
Famous experimental physicists include:
Archimedes (c. 287 BC – c. 212 BC)
Alhazen (965–1039)
Al-Biruni (973–1043)
Al-Khazini (fl. 1115–1130)
Galileo Galilei (1564–1642)
Evangelista Torricelli (1608–1647)
Robert Boyle (1627–1691)
Christiaan Huygens (1629–1695)
Robert Hooke (1635–1703)
Isaac Newton (1643–1727)
Ole Rømer (1644–1710)
Stephen Gray (1666–1736)
Daniel Bernoulli (1700-1782)
Benjamin Franklin (1706–1790)
Laura Bassi (1711–1778)
Henry Cavendish (1731–1810)
Joseph Priestley (1733–1804)
William Herschel (1738–1822)
Alessandro Volta (1745–1827)
Pierre-Simon Laplace (1749–1827)
Benjamin Thompson (1753–1814)
John Dalton (1766–1844)
Thomas Young (1773–1829)
Carl Friedrich Gauss (1777–1855)
Hans Christian Ørsted (1777–1851)
Humphry Davy (1778–1829)
Augustin-Jean Fresnel (1788–1827)
Michael Faraday (1791–1867)
James Prescott Joule (1818–1889)
William Thomson, Lord Kelvin (1824–1907)
James Clerk Maxwell (1831–1879)
Ernst Mach (1838–1916)
John William Strutt (3rd Baron Rayleigh) (1842–1919)
Wilhelm Röntgen (1845–1923)
Karl Ferdinand Braun (1850–1918)
Henri Becquerel (1852–1908)
Albert Abraham Michelson (1852–1931)
Heike Kamerlingh Onnes (1853–1926)
J. J. Thomson (1856–1940)
Heinrich Hertz (1857–1894)
Jagadish Chandra Bose (1858–1937)
Pierre Curie (1859–1906)
William Henry Bragg (1862–1942)
Marie Curie (1867–1934)
Robert Andrews Millikan (1868–1953)
Ernest Rutherford (1871–1937)
Lise Meitner (1878–1968)
Max von Laue (1879–1960)
Clinton Davisson (1881–1958)
Hans Geiger (1882–1945)
C. V. Raman (1888–1970)
William Lawrence Bragg (1890–1971)
James Chadwick (1891–1974)
Arthur Compton (1892–1962)
Pyotr Kapitsa (1894–1984)
Charles Drummond Ellis (1895–1980)
John Cockcroft (1897–1967)
Patrick Blackett (Baron Blackett) (1897–1974)
Ukichiro Nakaya (1900–1962)
Enrico Fermi (1901–1954)
Ernest Lawrence (1901–1958)
Walter Houser Brattain (1902–1987)
Pavel Cherenkov (1904–1990)
Abraham Alikhanov (1904–1970)
Carl David Anderson (1905–1991)
Felix Bloch (1905–1983)
Ernst Ruska (1906–1988)
John Bardeen (1908–1991)
William Shockley (1910–1989)
Dorothy Hodgkin (1910–1994)
Luis Walter Alvarez (1911–1988)
Chien-Shiung Wu (1912–1997)
Willis Lamb (1913–2008)
Charles Hard Townes (1915–2015)
Rosalind Franklin (1920–1958)
Owen Chamberlain (1920–2006)
Nicolaas Bloembergen (1920–2017)
Vera Rubin (1928–2016)
Mildred Dresselhaus (1930–2017)
Rainer Weiss (1932–)
Carlo Rubbia (1934–)
Barry Barish (1936–)
Samar Mubarakmand (1942–)
Serge Haroche (1944–)
Anton Zeilinger (1945–)
Alain Aspect (1947–)
Gerd Binnig (1947–)
Steven Chu (1948–)
Wolfgang Ketterle (1957–)
Andre Geim (1958–)
Lene Hau (1959–)
Timelines
See the timelines below for listings of physics experiments.
Timeline of atomic and subatomic physics
Timeline of classical mechanics
Timeline of electromagnetism and classical optics
Timeline of gravitational physics and relativity
Timeline of nuclear fusion
Timeline of particle discoveries
Timeline of particle physics technology
Timeline of states of matter and phase transitions
Timeline of thermodynamics
| Physical sciences | Physics basics: General | Physics |
686036 | https://en.wikipedia.org/wiki/Wave%20vector | Wave vector | In physics, a wave vector (or wavevector) is a vector used in describing a wave, with a typical unit being cycle per metre. It has a magnitude and direction. Its magnitude is the wavenumber of the wave (inversely proportional to the wavelength), and its direction is perpendicular to the wavefront. In isotropic media, this is also the direction of wave propagation.
A closely related vector is the angular wave vector (or angular wavevector), with a typical unit being radian per metre. The wave vector and angular wave vector are related by a fixed constant of proportionality, 2π radians per cycle.
It is common in several fields of physics to refer to the angular wave vector simply as the wave vector, in contrast to, for example, crystallography. It is also common to use the symbol k for whichever is in use.
In the context of special relativity, a wave four-vector can be defined, combining the (angular) wave vector and (angular) frequency.
Definition
The terms wave vector and angular wave vector have distinct meanings. Here, the wave vector is denoted by the bold symbol ν̃ and the wavenumber (its magnitude) by ν̃; the angular wave vector is denoted by k and the angular wavenumber by k = |k|. These are related by k = 2πν̃.
A sinusoidal traveling wave follows the equation

$$\psi(\mathbf{r}, t) = A \cos(\mathbf{k} \cdot \mathbf{r} - \omega t + \varphi),$$

where:
r is position,
t is time,
ψ is a function of r and t describing the disturbance of the wave (for example, for an ocean wave, ψ would be the excess height of the water, or for a sound wave, ψ would be the excess air pressure),
A is the amplitude of the wave (the peak magnitude of the oscillation),
φ is a phase offset,
ω is the (temporal) angular frequency of the wave, describing how many radians it traverses per unit of time, and related to the period T by the equation ω = 2π/T,
k is the angular wave vector of the wave, describing how many radians it traverses per unit of distance, and related to the wavelength λ by the equation |k| = k = 2π/λ.
The equivalent equation using the wave vector and frequency is

$$\psi(\mathbf{r}, t) = A \cos\left(2\pi(\tilde{\boldsymbol{\nu}} \cdot \mathbf{r} - \nu t) + \varphi\right),$$

where:
ν is the frequency, and
ν̃ is the wave vector.

A short numerical check of these relations is given below.
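A minimal numerical sketch, assuming a 440 Hz sound wave in air (illustrative values only), confirming that the angular and cyclic forms of the traveling wave agree:

```python
import numpy as np

nu = 440.0                 # frequency, Hz (cycles per second)
c = 343.0                  # phase speed in air, m/s
lam = c / nu               # wavelength, m
nu_tilde = 1.0 / lam       # wavenumber, cycles per metre
k = 2 * np.pi * nu_tilde   # angular wavenumber, radians per metre
omega = 2 * np.pi * nu     # angular frequency, radians per second

# The two forms of the traveling wave agree at an arbitrary point and time:
A, phi, x, t = 1.0, 0.3, 1.25, 0.002
psi_angular = A * np.cos(k * x - omega * t + phi)
psi_cyclic = A * np.cos(2 * np.pi * (nu_tilde * x - nu * t) + phi)
print(np.isclose(psi_angular, psi_cyclic))  # True
```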
Direction of the wave vector
The direction in which the wave vector points must be distinguished from the "direction of wave propagation". The "direction of wave propagation" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves in vacuum, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wavefronts.
In a lossless isotropic medium such as air, any gas, any liquid, amorphous solids (such as glass), and cubic crystals, the direction of the wavevector is the same as the direction of wave propagation. If the medium is anisotropic, the wave vector in general points in directions other than that of the wave propagation. The wave vector is always perpendicular to surfaces of constant phase.
For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation.
In solid-state physics
In solid-state physics, the "wavevector" (also called k-vector) of an electron or hole in a crystal is the wavevector of its quantum-mechanical wavefunction. These electron waves are not ordinary sinusoidal waves, but they do have a kind of envelope function which is sinusoidal, and the wavevector is defined via that envelope wave, usually using the "physics definition". See Bloch's theorem for further details.
In special relativity
A moving wave surface in special relativity may be regarded as a hypersurface (a 3D subspace) in spacetime, formed by all the events passed by the wave surface. A wavetrain (denoted by some variable ) can be regarded as a one-parameter family of such hypersurfaces in spacetime. This variable is a scalar function of position in spacetime. The derivative of this scalar is a vector that characterizes the wave, the four-wavevector.
The four-wavevector is a wave four-vector that is defined, in Minkowski coordinates, as:
$$K^\mu = \left(k^0, k^1, k^2, k^3\right) = \left(\frac{\omega}{c}, \mathbf{k}\right)$$
where the angular frequency $\frac{\omega}{c}$ is the temporal component, and the wavenumber vector $\mathbf{k}$ is the spatial component.
Alternately, the wavenumber $k$ can be written as the angular frequency $\omega$ divided by the phase velocity $v_p$, or in terms of the inverse period $T$ and inverse wavelength $\lambda$:
$$K^\mu = \left(\frac{\omega}{c}, \frac{\omega}{v_p}\hat{\mathbf{n}}\right) = \left(\frac{2\pi}{cT}, \frac{2\pi}{\lambda}\hat{\mathbf{n}}\right)$$
where $\hat{\mathbf{n}}$ is the unit vector in the direction of propagation.
When written out explicitly, its contravariant and covariant forms are:
$$K^\mu = \left(\frac{\omega}{c}, k_x, k_y, k_z\right), \qquad K_\mu = \left(\frac{\omega}{c}, -k_x, -k_y, -k_z\right)$$
using the metric signature $(+,-,-,-)$.
In general, the Lorentz scalar magnitude of the wave four-vector is:
$$K^\mu K_\mu = \left(\frac{\omega}{c}\right)^2 - |\mathbf{k}|^2 = \left(\frac{\omega_o}{c}\right)^2 = \left(\frac{m_o c}{\hbar}\right)^2$$
The four-wavevector is null for massless (photonic) particles, where the rest mass $m_o = 0$.
An example of a null four-wavevector would be a beam of coherent, monochromatic light, which has phase velocity
$$v_p = c \quad \text{(for light-like/null)}$$
which would have the following relation between the frequency and the magnitude of the spatial part of the four-wavevector:
$$\frac{\omega}{c} = |\mathbf{k}| \quad \text{(for light-like/null)}$$
The four-wavevector is related to the four-momentum as follows:
$$P^\mu = \left(\frac{E}{c}, \mathbf{p}\right) = \hbar K^\mu$$
The four-wavevector is related to the four-frequency (for light-like waves) as follows:
$$K^\mu = \left(\frac{\omega}{c}, \mathbf{k}\right) = \frac{2\pi}{c} N^\mu = \frac{2\pi}{c}\left(\nu, \nu\hat{\mathbf{n}}\right)$$
The four-wavevector is related to the four-velocity as follows:
$$K^\mu = \frac{\omega_o}{c^2} U^\mu = \frac{\omega_o}{c^2}\gamma\left(c, \mathbf{u}\right)$$
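A short numeric sketch (illustrative, not from the article; the 500 nm photon is an arbitrary choice, while hbar and c are standard CODATA values) ties together the null condition and the four-momentum relation:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

wavelength = 500e-9                     # arbitrary example photon
k = 2 * np.pi / wavelength              # angular wavenumber, rad/m
omega = c * k                           # null condition: omega/c = |k|

K = np.array([omega / c, k, 0.0, 0.0])  # wave four-vector
P = hbar * K                            # four-momentum = hbar * K

E = P[0] * c                            # energy from the temporal component
p = P[1]                                # momentum from the spatial component
assert np.isclose(E, p * c)             # E = pc for a massless particle
print(E, p)
```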
Lorentz transformation
Taking the Lorentz transformation of the four-wavevector is one way to derive the relativistic Doppler effect. For a boost with speed $v = \beta c$ along the $x$ direction, the Lorentz matrix is defined as
$$\Lambda = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0 \\ -\beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
In the situation where light is being emitted by a fast-moving source and one would like to know the frequency of light detected in an earth (lab) frame, we would apply the Lorentz transformation as follows. Note that the source is in a frame $S_s$ and earth is in the observing frame, $S_{obs}$.
Applying the Lorentz transformation to the wave vector
$$k^\mu_s = \Lambda^\mu{}_\nu \, k^\nu_{obs}$$
and choosing just to look at the $\mu = 0$ component results in
$$k^0_s = \gamma k^0_{obs} - \beta\gamma k^1_{obs} = \gamma k^0_{obs}\left(1 - \beta\cos\theta\right)$$
where $\cos\theta$ is the direction cosine of $k^1$ with respect to $k^0$, i.e. $k^1_{obs} = k^0_{obs}\cos\theta$ for a light-like wave vector.
So
$$\frac{\omega_{obs}}{\omega_s} = \frac{1}{\gamma\left(1 - \beta\cos\theta\right)}$$
Source moving away (redshift)
As an example, to apply this to a situation where the source is moving directly away from the observer ($\cos\theta = -1$), this becomes:
$$\omega_{obs} = \frac{\omega_s}{\gamma\left(1 + \beta\right)} = \omega_s\sqrt{\frac{1-\beta}{1+\beta}}$$
Source moving towards (blueshift)
To apply this to a situation where the source is moving straight towards the observer ($\cos\theta = 1$), this becomes:
$$\omega_{obs} = \frac{\omega_s}{\gamma\left(1 - \beta\right)} = \omega_s\sqrt{\frac{1+\beta}{1-\beta}}$$
Source moving tangentially (transverse Doppler effect)
To apply this to a situation where the source is moving transversely with respect to the observer ($\cos\theta = 0$), this becomes:
$$\omega_{obs} = \frac{\omega_s}{\gamma}$$
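To make the three cases concrete, here is a minimal Python sketch (not from the original article; the value of β is an arbitrary choice, and units with c = 1 are assumed) that boosts a null wave four-vector with the Lorentz matrix above and recovers the boxed Doppler formula:

```python
import numpy as np

beta = 0.6                                  # source speed v/c (arbitrary choice)
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Lorentz boost along x, in units where c = 1
L = np.array([[ gamma,      -beta*gamma, 0, 0],
              [-beta*gamma,  gamma,      0, 0],
              [ 0,           0,          1, 0],
              [ 0,           0,          0, 1]])

def omega_obs(omega_s, cos_theta):
    """Boost a null wave four-vector from the observer frame into the source
    frame and solve for the observed frequency (c = 1)."""
    # In the observer frame the photon has K = w * (1, cos_theta, sin_theta, 0);
    # requiring the boosted time component to equal omega_s fixes w.
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    k_obs_unit = np.array([1.0, cos_theta, sin_theta, 0.0])
    return omega_s / (L @ k_obs_unit)[0]

omega_s = 1.0
print(omega_obs(omega_s, -1.0))   # receding source: redshift
print(omega_obs(omega_s, +1.0))   # approaching source: blueshift
print(omega_obs(omega_s,  0.0))   # transverse: omega_s / gamma

# Cross-check the receding case against the square-root form:
assert np.isclose(omega_obs(omega_s, -1.0),
                  omega_s * np.sqrt((1 - beta) / (1 + beta)))
```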
| Physical sciences | Waves | Physics |
686108 | https://en.wikipedia.org/wiki/Cow%20shark | Cow shark | Cow sharks are a shark family, the Hexanchidae, characterized by an additional pair or pairs of gill slits. Its 37 species are placed within the 10 genera: Gladioserratus, Heptranchias, Hexanchus, Notidanodon, Notorynchus, Pachyhexanchus, Paraheptranchias, Pseudonotidanus, Welcommia, and Weltonia.
Description
Cow sharks are considered the most primitive of all the sharks, as their skeletons resemble those of ancient extinct forms, with few modern adaptations. Their excretory and digestive systems are also unspecialized, suggesting they may resemble those of primitive shark ancestors. A possible hexanchid tooth is known from the Permian of Japan, making the family a possible extant survivor of the Permian–Triassic extinction.
Their most distinctive feature, however, is the presence of a sixth, and, in two genera, a seventh, gill slit, in contrast to the five found in all other sharks. The first pair are not connected across the throat. They range from in adult body length.
These cylindrical sharks have a ventral mouth with compressed, comb-like teeth in the lower jaw and smaller, pointed teeth in the upper jaw. They have a short, angular and spinless dorsal fin. The pelvic fins are smaller than the angular pectoral fins. The caudal fin has a notch towards the end.
Biology
Cow sharks are ovoviviparous, with the mother retaining the egg cases in her body until they hatch. They feed on relatively large fish of all kinds, including other sharks, as well as on crustaceans and carrion.
Fossil record
The fossil record of cow sharks consists mainly of isolated teeth. Although skeletal remains of this family have been found from the Jurassic period, these are very rare and have only been found in the "Late Jurassic lithographic limestones of South Germany, Nusplingen, Solnhofen, and late Cretaceous calcareous sediments of Lebanon." Due to these sparse records, some scientists conclude that the cow shark is now a more "diverse and numerous species" than in the past.
Species
The 37 species of cow shark (five of which are extant), in 10 genera, are:
†Gladioserratus Underwood, Goswami, Prasad, Verma & Flynn, 2011
†Gladioserratus aptiensis Pictet, 1864
†Gladioserratus dentatus Guinot, Cappetta & Adnet, 2014
†Gladioserratus magnus Underwood, Goswami, Prasad, Verma & Flynn, 2011
Heptranchias Rafinesque, 1810
Heptranchias perlo (Bonnaterre, 1788) (sharpnose sevengill shark)
†Heptranchias ezoensis Applegate & Uyeno, 1968
†Heptranchias howelli Reed, 1946
†Heptranchias karagalensis Kozlov in Zhelezko & Kozlov, 1999
†Heptranchias tenuidens Leriche, 1938
Hexanchus Rafinesque, 1810
Hexanchus griseus (Bonnaterre, 1788) (bluntnose sixgill shark)
Hexanchus nakamurai Teng, 1962 (bigeyed sixgill shark)
Hexanchus vitulus Springer & Waller, 1969 (Atlantic sixgill shark)
†Hexanchus agassizi Cappetta, 1976
†Hexanchus andersoni Jordan, 1907
†Hexanchus casieri Kozlov, 1999
†Hexanchus collinsonae Ward, 1979
†Hexanchus gracilis Davis, 1887
†Hexanchus hookeri Ward, 1979
†Hexanchus microdon Agassiz, 1843
†Hexanchus tusbairicus Kozlov in Zhelezko & Kozlov, 1999
†Notidanodon Cappetta, 1975
†Notidanodon lanceolatus Woodward, 1886
†Notidanodon pectinatus Agassiz, 1843
Notorynchus Ayres, 1855
Notorynchus cepedianus (Péron, 1807) (broadnose sevengill shark)
†Notorynchus borealus Jordan & Hannibal, 1923
†Notorynchus kempi Ward, 1979
†Notorynchus lawleyi Cigala Fulgosi, 1983
†Notorynchus primigenius Agassiz, 1843
†Notorynchus serratissimus Agassiz, 1843
†Notorynchus subrecurvus Oppenheimer, 1907
†Pachyhexanchus Cappetta, 1990
†Pachyhexanchus pockrandti Ward & Thies, 1987
†Paraheptranchias Pfeil, 1981
†Paraheptranchias repens Probst, 1879
†Pseudonotidanus Underwood & Ward, 2004
†Pseudonotidanus semirugosus Underwood & Ward, 2004
†Welcommia Klug & Kriwet, 2010
†Welcommia bodeuri Cappetta, 1990
†Welcommia cappettai Klug & Kriwet, 2010
†Weltonia Ward, 1979
†Weltonia ancistrodon Arambourg, 1952
†Weltonia burnhamensis Ward, 1979
†Xampylodon Cappetta, Morrison & Adnet, 2019
†Xampylodon brotzeni (Siverson, 1995)
†Xampylodon dentatus (Woodward, 1886)
†Xampylodon loozi (Vincent, 1876)
| Biology and health sciences | Sharks | Animals |
4455993 | https://en.wikipedia.org/wiki/Fire%20alarm%20system | Fire alarm system | A fire alarm system is a building system designed to detect fire, smoke, carbon monoxide, or other fire-related emergencies, and to alert occupants and emergency forces of their presence. Fire alarm systems are required in most commercial buildings. They may include smoke detectors, heat detectors, and manual fire alarm activation devices (pull stations). All components of a fire alarm system are connected to a fire alarm control panel. Fire alarm control panels are usually found in an electrical or panel room. Fire alarm systems generally use visual and audio signalization to warn the occupants of the building. Some fire alarm systems may also disable elevators, which are unsafe to use during a fire under most circumstances.
Design
Fire alarm systems are designed after fire protection requirements in a location are established, which is usually done by referencing the minimum levels of security mandated by the appropriate model building code, insurance agencies, and other authorities. A fire alarm designer will detail specific components, arrangements, and interfaces necessary to accomplish these requirements. Equipment specifically manufactured for these purposes is selected, and standardized installation methods are anticipated during the design. There are several commonly referenced standards for fire protection requirements, including:
ISO 7240-14, the international standard for the design, installation, commissioning, and service of fire detection and fire alarm systems in and around a building. This standard was published in August 2013.
NFPA 72, The National Fire Alarm Code, an established and widely used installation standard from the United States. In Canada, the Underwriters' Laboratories of Canada or ULC provides fire system installation standards.
TS 54-14 is a technical specification (CEN/TS) for fire detection and fire alarm systems (Part 14: Guidelines for planning, design, installation, commissioning, use, and maintenance). Technical Committee CEN/TC 72 has prepared this document as part of the EN 54 series of standards. This standard was published in October 2018.
There are national codes in each European country for planning, design, installation, commissioning, use, and maintenance of fire detection systems, with additional requirements to those mentioned in TS 54-14:
Germany, VdS 2095
Italy, UNI 9795
France, NF S61-936
Spain, UNE 23007-14
United Kingdom, BS 5839 Part 1
Across Oceania, the following standards outline the requirements, test methods, and performance criteria for fire detection control and indicating equipment utilised in building fire detection and fire alarm systems:
Australia AS 1603.4 (superseded), AS 4428.1 (superseded), and AS 7240.2:2018.
Parts
Fire alarm systems are composed of several distinct parts:
Fire alarm control panel (FACP), or fire alarm control unit (FACU): This component, the hub of the system, monitors inputs and system integrity, controls outputs, and transmits information.
Remote annunciator: a device that connects directly to the panel; the annunciator's main purpose is to allow emergency personnel to view the system status and take command from outside the electrical room where the panel is located. Usually, annunciators are installed by the front door, at the entrance the fire department responds to, or in a fire command center. Annunciators typically have the same commands as those available from the panel's LCD screen, although some annunciators allow for full system control.
Primary power supply: Commonly, a commercial power utility supplies a non-switched 120 or 240-volt alternating current source. A dedicated branch circuit is connected to the fire alarm system and its constituents in non-residential applications. "Dedicated branch circuits" should not be confused with "Individual branch circuits" which supply energy to a single appliance.
Secondary (backup) power supplies: Sealed lead-acid storage batteries or other emergency sources, including generators, are used to supply energy during a primary power failure. The batteries can be either inside the bottom of the panel or inside a separate battery box installed near the panel.
Initiating devices: These components act as inputs to the fire alarm control unit and are manually or automatically activated. Examples include pull stations, heat detectors, duct detectors, and smoke detectors.
Fire alarm notification appliance: This component uses energy supplied from the fire alarm system or other stored energy source to inform the proximate persons of the need to take action, usually to evacuate. This is done using a variety of audio and visual means, ranging from pulsing incandescent lights, flashing strobe lights, horns, sirens, chimes, bells, loudspeakers, or a combination of these devices.
Building safety interfaces: This interface allows the fire alarm system to control aspects of the built environment, prepare the building for fire, and control the spread of smoke fumes by influencing air movement, lighting, process control, human transport, and availability of exits.
Initiating devices
Initiating devices used to activate a fire alarm system are either manually or automatically actuated devices. Manually actuated devices, also known as fire alarm boxes, manual pull stations, or simply pull stations, break glass stations, and (in Europe) call points, are installed to be readily located (usually near the exits of a floor or building), identified, and operated. They are usually actuated using physical interaction, such as pulling a lever or breaking glass.
Automatically actuated devices can take many forms, and are intended to respond to any number of detectable physical changes associated with fire: convected thermal energy for a heat detector, products of combustion for a smoke detector, radiant energy for a flame detector, combustion gases for a fire gas detector, and operation of sprinklers for a water-flow detector. Automatic initiating devices may use cameras and computer algorithms to analyze and respond to the visible effects of fire and movement in applications inappropriate for or hostile to other detection methods.
Notification appliances
Alarms can take many forms, but are most often either motorized bells or wall-mountable sounders or horns. They can also be speaker strobes that sound an alarm, followed by a voice evacuation message for clearer instructions on what to do. Fire alarm sounders can be set to certain frequencies and different tones, either low, medium, or high, depending on the country and manufacturer of the device. Most fire alarm systems in Europe sound like a siren with alternating frequencies. Fire alarm electronic devices are known as horns in the United States and Canada and can be continuous or set to different codes. Fire alarm warning devices can also be set to different volume levels.
Notification appliances utilize audible, visible, tactile, textual or even olfactory stimuli (odorizers) to alert the occupants of the need to evacuate or take action in the event of a fire or other emergency. Evacuation signals may consist of simple appliances that transmit uncoded information, coded appliances that transmit a predetermined pattern, and/or appliances that transmit audible and visible information such as live or prerecorded instructions and illuminated message displays. Some notification appliances are a combination of fire alarm and general emergency notification appliances, allowing both types of emergency notifications from a single device.
Emergency voice alarm communication systems
Some fire alarm systems utilize emergency voice alarm communication systems (EVAC) to provide prerecorded and manual voice messages. Voice alarm systems are typically used in high-rise buildings, arenas, and other large "defend-in-place" occupancies such as hospitals and detention facilities where total evacuation is difficult to achieve. Voice-based systems allow response personnel to conduct orderly evacuation and notify building occupants of changing event circumstances.
Audible textual appliances can be employed as part of a fire alarm system that includes EVAC capabilities. High-reliability speakers notify the occupants of the need for action concerning a fire or other emergency. These speakers are employed in large facilities where general undirected evacuation is impracticable or undesirable. The signals from the speakers are used to direct the occupant's response. The fire alarm system automatically actuates speakers in a fire event. Following a pre-alert tone, selected groups of speakers may transmit one or more prerecorded messages directing the occupants to safety. These messages may be repeated in one or more languages. The system may be controlled from one or more locations within the building, known as "fire warden stations", or from a single location designated as the building's "fire command center". From these control locations, trained personnel activating and speaking into a dedicated microphone can suppress the replay of automated messages to initiate or relay real-time voice instructions.
In highrise buildings, different evacuation messages may be played on each floor, depending on the location of the fire. The floor the fire is on, along with the floors above it, may be told to evacuate, while floors much lower may be asked to stand by.
In the United States
In the United States, fire alarm evacuation signals generally consist of a standardized audible tone, with visual notification in all public and common-use areas. Emergency signals are intended to be distinct and understandable to avoid confusion with other signals.
As per NFPA 72, 18.4.2 (2010 Edition), Temporal Code 3 is the standard audible notification in a modern system. It consists of a repeated three-pulse cycle (0.5 s on, 0.5 s off, 0.5 s on, 0.5 s off, 0.5 s on, 1.5 s off). Voice evacuation is the second most common audible notification in modern systems. Legacy systems, typically found in older schools and buildings, have used continuous tones alongside other audible notifications.
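As an illustration of the timing only, the Python sketch below prints the Temporal Code 3 pattern; sound_on and sound_off are hypothetical stand-ins for actual notification-appliance control, which the standard itself does not specify:

```python
import time

# Temporal Code 3 per NFPA 72: three 0.5 s pulses separated by 0.5 s of
# silence, followed by 1.5 s of silence, then the cycle repeats.

def sound_on():
    # Hypothetical placeholder for energizing a horn or sounder.
    print("ON", end=" ", flush=True)

def sound_off():
    # Hypothetical placeholder for de-energizing the appliance.
    print("off", end=" ", flush=True)

def temporal_code_3(cycles=2):
    for _ in range(cycles):
        for pulse in range(3):
            sound_on()
            time.sleep(0.5)                          # 0.5 s on
            sound_off()
            time.sleep(1.5 if pulse == 2 else 0.5)   # 1.5 s off after third pulse

temporal_code_3()
```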
In the United Kingdom
In the United Kingdom, fire alarm evacuation signals generally consist of a two-tone siren with visual notifications in all public and common-use areas. Some fire alarm devices can emit an alert signal, which is generally used in schools for lesson changes, the start of morning break, the end of morning break, the start of lunch break, the end of lunch break, and when the school day is over.
Emergency communication systems
New codes and standards introduced around 2010, especially the new UL Standard 2572, the US Department of Defense's UFC 4-021-01 Design and O&M Mass Notification Systems, and NFPA 72 2010 edition Chapter 24, have led fire alarm system manufacturers to expand their systems' voice evacuation capabilities to support new requirements for mass notification. These expanded capabilities include support for multiple types of emergency messaging (i.e., inclement weather emergencies, security alerts, amber alerts). The major requirement of a mass notification system is to provide prioritized messaging according to the local facility's emergency response plan, and the fire alarm system must support the promotion and demotion of notifications based on this emergency response plan. In the United States, emergency communication systems also have requirements for visible notification in coordination with any audible notification activities to meet the needs of the Americans with Disabilities Act.
Mass notification system categories include the following:
Tier 1 systems are in-building and provide the highest level of survivability
Tier 2 systems are out of the building and provide the middle level of survivability
Tier 3 systems are "At Your Side" and provide the lowest level of survivability
Mass notification systems often extend the notification appliances of a standard fire alarm system to include PC-based workstations, text-based digital signage, and a variety of remote notification options including email, text message, RSS feed, or IVR-based telephone text-to-speech messaging.
Residential systems
Residential fire alarm systems are commonplace. Typically, residential fire alarm systems are installed along with security alarm systems. In the United States, a residential fire alarm system is required in buildings where more than 12 smoke detectors are needed. Residential systems generally have fewer parts compared to commercial systems.
Building safety interfaces
Various equipment may be connected to a fire alarm system to facilitate evacuation or to control a fire, directly or indirectly:
Magnetic smoke door holders and retainers are wall-mounted solenoids or electromagnets controlled by a fire alarm system or detection component that magnetically secures spring-loaded self-closing smoke-tight doors in the open position. The device demagnetizes to allow automatic closure of the door on command from the fire control or upon failure of the power source, interconnection, or controlling element. Stored energy in the form of a spring or gravity can then close the door to restrict the passage of smoke from one space to another in order to facilitate evacuation and firefighting efforts. Electromagnetic fire door holders may also be hard-wired into the fire panel, radio-controlled, triggered by radio waves from a central controller connected to a fire panel, or acoustic, which learns the sound of the fire alarm and releases the door upon hearing this exact sound.
Duct-mounted smoke detectors may be mounted in such a manner as to sample the airflow through ductwork and other plenums fabricated explicitly for the transport of environmental air into conditioned spaces. As part of the fire alarm system, these detectors may be connected to the fan motor control circuits in order to stop air movement, close dampers and generally prevent the recirculation of toxic smoke and fumes from fire in occupied spaces.
Automatic initiating devices associated with elevator operation are used for emergency elevator functions, such as the recall of associated elevator cab(s). The recall will cause the elevator cabs to return to the ground level for use by fire service response teams and to ensure that cabs do not return to the floor of fire incidence, as well as preventing people from becoming trapped in the elevators. Phases of operation include primary recall (typically the ground level), alternate/secondary recall (typically a floor adjacent to the ground level—used when the fire alarm initiation occurred on the primary level), illumination of the "fire hat" indicator when an alarm occurs in the elevator hoistway or associated control room, and in some cases shunt trip (disconnect) of elevator power (generally used where the control room or hoistway is protected by fire sprinklers).
Audio public address racks can be interfaced with a fire alarm system by adding a signaling control relay module to either the rack's power supply unit or the main amplifier driving the rack. The purpose of the fire alarm system interface is usually to "mute" the background music in case of an emergency.
British fire alarm system categories
In the United Kingdom, fire alarm systems in non-domestic premises are generally designed and installed in accordance with the guidance given in BS 5839 Part 1. There are many types of fire alarm systems, each suited to different building types and applications. A fire alarm system can vary dramatically in price and complexity, from a single panel with a detector and sounder in a small commercial property to an addressable fire alarm system in a multi-occupancy building.
BS 5839 Part 1 categorizes fire alarm systems as:
"M" manual systems (no automatic fire detectors, so the building is fitted with call points and sounders).
"L" automatic systems intended for the protection of life.
"P" automatic systems intended for the protection of property.
Categories for automatic systems are further subdivided into L1 to L5 and P1 to P2.
Zoning
An important consideration when designing fire alarms is that of individual "zones". The following recommendations are found in BS 5839 Part 1:
A single zone should not exceed in floor space.
Where addressable systems are in place, two faults should not remove protection from an area greater than .
A building may be viewed as a single zone if the floor space is less than .
Where the floor space exceeds then all zones should be restricted to a single floor level.
Stairwells, lift shafts or other vertical shafts (nonstop risers) within a single fire compartment should be considered as one or more separate zones.
The maximum distance traveled within a zone to locate the fire should not exceed .
The NFPA recommends placing a list for reference near the fire alarm control panel showing the devices contained in each zone.
| Technology | Fire protection | null |
4461218 | https://en.wikipedia.org/wiki/Paracanthurus | Paracanthurus | Paracanthurus hepatus is a species of Indo-Pacific surgeonfish. A popular fish in marine aquaria, it is the only member of the genus Paracanthurus. A number of common names are attributed to the species, including regal tang, palette surgeonfish, blue tang (leading to confusion with the Atlantic species Acanthurus coeruleus), royal blue tang, hippo tang, blue hippo tang, flagtail surgeonfish, Pacific regal blue tang, blue surgeonfish, hepatus tang, Indo-Pacific blue tang, regal blue surgeonfish, wedge-tailed tang, and wedgetail blue tang. It is most closely related to the genus Zebrasoma, with which it forms a sister group.
Description
Paracanthurus hepatus has a royal blue body, yellow tail, and black "palette" design. Its length at first sexual maturity is 149.2 mm. Adults typically weigh around and males are generally larger than females. The back has a broad black area that encloses at the tip of the pectoral, creating a blue oval on each side of the fish that extends in the direction of the eye. The tail has a bright yellow triangle with its apex anterior to the caudal spine and its base at the posterior end of the caudal fin. Black surrounds the triangle on the upper and lower lobes of the caudal fin, in the same hue as the back area.
Paracanthurus has small scales, each with short ctenii on the upper surface. Scales on the caudal spine possess ctenii approximately three times as long as scales on the rest of the body. Scales anteriorly placed on the head between the eye and the upper jaw are larger, with tuberculated, bony plates.
This fish has a compressed, elliptical body shape, and a terminal snout. It has nine dorsal spines, 26–28 dorsal soft rays, three anal spines, 24–26 anal soft rays, and 16 principal caudal rays with slightly projecting upper and lower lobes. Its pelvic fin is made up of one spine and three rays; this characteristic is considered a synapomorphy of the genera Naso and Paracanthurus. The caudal peduncle has a spine located in a shallow groove, which is also a characteristic of its sister taxon Zebrasoma. It has 22 vertebrae. Paracanthurus has teeth that are small, close-set, denticulated, and described as incisor-like.
Jaw morphology includes an ectopterygoid that links the palatine to the quadrate near the articular condyle. A crest is present on the anterodorsal surface of the hyomandibular. The opercle is less developed, with a distinctly convex profile.
Some slight variation in appearance is present within Paracanthurus. The lower body is yellow in west-central Indian Ocean individuals, and bluish in Pacific individuals. Additionally, the blue color on the trunk of Paracanthurus loses pigmentation in response to changes in light and/or melatonin levels, making its appearance slightly lighter in color at night.
Distribution
The regal blue tang can be found throughout the Indo-Pacific. It is seen in the reefs of the Philippines, Indonesia, Japan, the Great Barrier Reef of Australia, New Caledonia, Samoa, East Africa, and Sri Lanka. A single specimen was photographed in 2015 in the Mediterranean Sea off Israel. Vagrants were found on two separate occasions in Hawaii, and are assumed to be aquarium releases.
Paracanthurus is an extant resident in the following territories: American Samoa; Australia; British Indian Ocean Territory; Brunei Darussalam; Christmas Island; Cocos (Keeling) Islands; Comoros; Cook Islands; Disputed Territory (Paracel Is., Spratly Is.); Fiji; French Southern Territories (Mozambique Channel Is.); Guam; India (Nicobar Is., Andaman Is.); Indonesia; Japan; Kenya; Kiribati (Kiribati Line Is., Phoenix Is., Gilbert Is.); Madagascar; Malaysia; Maldives; Marshall Islands; Mauritius; Mayotte; Micronesia, Federated States of; Myanmar; Nauru; New Caledonia; Niue; Northern Mariana Islands; Palau; Papua New Guinea; Philippines; Réunion; Samoa; Seychelles; Singapore; Solomon Islands; Somalia; South Africa; Sri Lanka; Taiwan, Province of China; Tanzania, United Republic of; Thailand; Timor-Leste; Tokelau; Tonga; Tuvalu; United States (Hawaiian Is.); United States Minor Outlying Islands (US Line Is., Howland-Baker Is.); Vanuatu; Viet Nam; Wallis and Futuna.
Ecology
Paracanthurus is a diurnal marine species that occupies marine neritic habitats along coastlines. It is found in clear water on exposed outer reef areas or in channels with a moderate or strong current. It primarily utilizes coral reef habitats, but is also known to utilize seagrass beds, mangroves, algal beds, and rocky reefs. It has an upper and lower depth limit of 2 meters and 40 meters, respectively. They live in pairs or small groups of 8 to 14 individuals. They can also be found near cauliflower corals on the seaweed side of coral reefs. Juveniles can be found in schools using Acropora for shelter. Numbers of males and females tend to maintain a 1:1 ratio.
The fish is important for coral health, as it eats algae that may otherwise choke the coral by overgrowth.
Diet
As a juvenile, its diet consists primarily of plankton. Adults are omnivorous and feed on zooplankton, but will also graze on filamentous algae.
Life cycle
Spawning takes place year round, with a peak between April and September. Spawning occurs during late afternoon and evening hours around outer reef slopes. This event is indicated by a change in color from a uniform dark blue to a pale blue. Males aggressively court female members of the school, leading to a quick upward spawning rush toward the surface of the water during which eggs and sperm are released. The eggs are small, approximately in diameter. The eggs are pelagic, each containing a single droplet of oil for flotation. The fertilized eggs hatch in twenty-four hours, revealing small, translucent larvae with silvery abdomens and rudimentary caudal spines. Once the fish become opaque, the black "palette" pattern on juveniles does not fully connect until they mature. These fish reach sexual maturity at 9–12 months of age, and at approximately 149.22 mm in size. Fecundity tends to correlate positively with weight.
Fishes in the family Acanthuridae, including Paracanthurus, produce altricial larvae that receive no parental care. After hatching, these larvae rely on yolk reserves in order to survive their first two to three days of life.
Importance to humans
The regal blue tang is of minor commercial fisheries importance; however, it is used as a bait fish. The flesh has a strong odor and is not highly prized. This fish may cause ciguatera poisoning if consumed by humans. However, regal blue tangs are collected commercially for the aquarium trade. Handling the tang risks being badly cut by the caudal spine. These spines, one on each of the two sides of the caudal peduncle, the area where the tail joins the rest of the body, are extended when the fish is stressed. The quick, thrashing sideways motion of the tail can produce deep wounds that result in swelling and discoloration, posing a risk of infection. It is believed that some species of Acanthurus have venom glands, while others do not. The spines are used only as a method of protection against aggressors.
The regal blue tang is one of the most common and most popular marine aquarium fish all over the world, holding its place as the 8th most traded species worldwide. In 1997–2002, 74,557 individuals were traded in official tracked sales and in 2011 approximately 95,000 Paracanthurus were imported for use as a marine ornamental fish. When harvesting Paracanthurus in the wild, juveniles are specifically targeted since they are easiest to collect due to their tendency to travel in schools. Paracanthurus for human use are harvested in the wild rather than raised in aquaculture. Conservationists encourage efforts to switch to aquaculture in order to better preserve wild populations.
In popular culture
In the 2003 Disney/Pixar film Finding Nemo, one of the main characters, Dory (voiced by Ellen DeGeneres), is a regal blue tang suffering from short-term memory loss. She and her parents, Jenny and Charlie (voiced by Diane Keaton and Eugene Levy), appear in the 2016 Disney/Pixar film sequel, Finding Dory.
After the release of Finding Nemo in 2003, popular media outlets reported a rise in demand for clownfish, the co-star alongside the blue tang in the movie, to such an extent that the phenomenon was dubbed the "Nemo effect". However, the legitimacy of the Nemo effect was disputed by peer-reviewed analysis. Nonetheless, a similar wave of rumors circulated on the internet following the release of Finding Dory in 2016. According to peer-reviewed analysis, online searches for the blue tang increased for 2–3 weeks after the release of Finding Dory, though data on imports of Paracanthurus show there was no significant increase in imports of blue tang following the release of the movie.
Conservation
The species is classified as Least Concern by the IUCN. No population declines were found when it was last assessed by the IUCN in 2010. Its current population trend is unknown and there is insufficient data on catch. While rare throughout its range, it is widespread geographically. Its distribution overlaps with multiple marine protected areas.
However, it is threatened by overexploitation (mostly for the aquarium trade) and destructive fishing practices. Individuals for the aquarium trade are predominantly wild caught, so monitoring of wild populations is needed to prevent overharvest. Since it is dependent on fragile coral reef habitats, habitat destruction also constitutes pressure in parts of its range. Of individuals in the family Acanthuridae, 80% have experienced a 30% loss of coral reefs across their distribution. More research is needed to fully understand the effects of this loss on Acanthuridae. In an endeavor to mitigate the destruction of natural regal blue tang populations, efforts have been made to breed the species in captivity. It was successfully captive-bred for the first time in 2016, after a six-year effort by biologist Kevin Barden of Rising Tide Conservation.
| Biology and health sciences | Acanthomorpha | Animals |
4462066 | https://en.wikipedia.org/wiki/Gorgonops | Gorgonops | Gorgonops (from 'Gorgon' and 'eye, face', literally 'Gorgon eye' or 'Gorgon face') is an extinct genus of gorgonopsian therapsid, of which it is the type genus. Gorgonops lived during the Late Permian (Wuchiapingian), about 260–254 million years ago in what is now South Africa.
History of discovery
The holotype of the type species, Gorgonops torvus, was one of the first therapsids discovered. It was described by Richard Owen, who also coined the name "Dinosauria" on the basis of the first known dinosaur fossils. G. torvus was also used as the type for the family Gorgonopsidae, which was described by Richard Lydekker in 1890. Five years later, in 1895, Harry Govier Seeley used this genus to establish the larger clade of Gorgonopsia. In later years, a large number of further species and genera were designated, though many of these were later determined to be synonyms.
Gorgonops is known from the Tropidostoma and most of the Cistecephalus Assemblage Zones.
Description
Gorgonops was a medium-sized gorgonopsian, with a skull length of , depending on the species. They ranged from long from nose to tail. Gorgonops would have been one of the key predators across southern Africa during the Late Permian. Because the canines were so large, they would have had little trouble in penetrating the tough hides of some of the herbivores of the time, particularly pareiasaurs such as Pareiasaurus. Aside from the teeth, one of the key predatory advantages that Gorgonops had over prey was its semi-erect gait, compared to the sprawling gait exhibited by most prey animals of the time. Aside from allowing for more energy efficient locomotion, this allowed Gorgonops to travel at relatively high speeds.
Skull
Relative to body size, Gorgonops had a deep skull with a triangular profile when viewed from above. Perhaps the most distinctive features were two enlarged canine teeth that were so big ( long) they almost protruded beyond the lower jaw. To help protect these teeth, the lower jaws grew in such a shape so that the anterior (front) portion was thicker than the posterior (rear) portion. This form would have protected the enlarged canine teeth from accidental damage, and was similar in bone function to the flanges of bone of sabre-toothed cats in the Cenozoic.
Species
Gorgonops torvus (Owen, 1876)
The type species. The holotype is an incomplete and flattened skull, allegedly found at Mildenhall's farm (Xlu Xlu), on the Queen's Road south of Fort Beaufort, in the Eastern Cape of South Africa. A number of other specimens have been found since, all from the Tropidostoma and/or Cistecephalus Assemblage Zone(s). This was a medium-sized therapsid, with a skull about 22 cm in length. It is distinguished from other species by a longer snout, and other details of the bones of the skull. Originally considered rather simple, it is actually (according to Sigogneau-Russell) a rather specialised member of the group.
Gorgonops whaitsi (Broom, 1912)
Larger than G. torvus, with the rear of the skull wider, and other details of proportion. Originally the type species of Scymnognathus. Despite being known from a large number of specimens from the Karoo Basin, Beaufort West (Tropidostoma/Cistecephalus Assemblage Zone), the species remains poorly known. Watson and Romer placed Gorgonops and Scymnognathus in two different families, while Sigogneau-Russell placed the two species in the same genus, and considers G. whaitsi a more primitive (less derived) form.
Synonyms: Scymnognathus whaitsi (Broom, 1912)
Gorgonops longifrons (Haughton, 1915)
A large specimen known from an incomplete and flattened skull about long. Orbit larger and snout longer than G. whaitsi, from which it may have descended. Beaufort West, Tropidostoma/Cistecephalus Assemblage Zone.
Synonyms: Gorgonognathus longifrons (Haughton, 1915)
Gorgonops? eupachygnathus (Watson, 1921)
A flattened, incomplete, medium-sized skull, probably a juvenile of either G. torvus or G. whaitsi.
Synonyms: Leptotrachelus eupachygnathus (Watson, 1921); Leptotracheliscops eupachygnathus (Watson, 1921)
Gorgonops? dixeyi (Haughton, 1926)
A large, incomplete and flattened skull, from the Chiweta Beds, Nyasaland. Placement uncertain. Probably Low Cistecephalus Assemblage Zone equivalent (= middle of the Wuchiapingian Stage).
Synonyms: Chiwetasaurus dixeyi (Haughton, 1926)
Gorgonops? kaiseri (Broili & Schroeder, 1934)
A large (about long), incomplete skull, with a high snout and narrower in the rear than other species, from the "High Tapinocephalus zone" (earlier than the other species, most probably Pristerognathus Assemblage Zone).
Synonyms: Pachyrhinos kaiseri (Broili & Schroeder, 1934)
Classification
Below is a cladogram from Gebauer's 2007 phylogenetic analysis.
| Biology and health sciences | Proto-mammals | Animals |
4462232 | https://en.wikipedia.org/wiki/Scutosaurus | Scutosaurus | Scutosaurus ("shield lizard") is an extinct genus of pareiasaur parareptiles. Its genus name refers to large plates of armor scattered across its body. It was a large anapsid reptile that, unlike most reptiles, held its legs underneath its body to support its great weight. Fossils have been found in the Sokolki Assemblage Zone of the Malokinelskaya Formation in European Russia, close to the Ural Mountains, dating to the late Permian (Lopingian) between 264 and 252 million years ago.
Research history
The first fossils were uncovered by Russian paleontologist Vladimir Prokhorovich Amalitskii while documenting plant and animal species in the Upper Permian sediments in the Northern Dvina River, Arkhangelsk District, Northern European Russia. Amalitskii had discovered the site in 1899, and he and his wife Anne Amalitskii continued to oversee excavation until 1914, recovering numerous nearly complete and articulated (in their natural position) skeletons belonging to a menagerie of different animals. Official diagnoses of these specimens were delayed due to World War I. The first published name of what is now called Scutosaurus karpinskii appeared in 1917, when British zoologist David Meredith Seares Watson captioned a reconstruction of its scapulocoracoid based on the poorly preserved specimen PIN 2005/1535 as "Pariasaurus Karpinskyi, Amalitz" (giving credit to Amalitskii for the name). Amalitskii died later that year, and the actual diagnosis of the animal was posthumously published in 1922, with the name "Pareiosaurus" karpinskii, and the holotype specimen designated as the nearly complete skeleton PIN 2005/1532. Three partial skulls were also found, but Amalitskii decided to split these off into new species as "P. elegans", "P. tuberculatus", and "P. horridus".
"Pariasaurus" and "Pareiosaurus" were both misspellings of the South African Pareiasaurus. In 1930, Soviet vertebrate paleontologist Aleksandra Paulinovna Anna Hartmann-Weinberg said that the pareiasaur material from North Dvina represents only 1 species, and that this species was distinct enough from other Pareiasaurus to justify placing it in a new genus. Though Amalitskii had used a unique genus name "Pareiosaurus", this was an accident, and she declared "Pareiosaurus" to be a junior synonym of Pareiasaurus, and erected the genus Scutosaurus. She used the spelling "karpinskyi" for the species name, but switched to karpinskii in 1937. At the same time, she also split off another unique genus "Proelginia permiana" based on the partial skull PIN 156/2. In 1968, Russian paleontologist N. N. Kalandadze and colleagues considered "Proelginia" to be synonymous with Scutosaurus. Because the remains are not well preserved, the validity of "Proelginia" is unclear. In 1987, Russian paleontologist Mikhail Feodosʹevich Ivakhnenko erected a new species "S. itilensis" based on skull fragments PIN 3919, and resurrected "S. tuberculatus", but Australian biologist Michael S. Y. Lee considered both of these actions unjustified in 2000. In 2001, Lee petitioned the International Commission on Zoological Nomenclature to formally override the spelling karpinskyi (because Watson clearly did not intend his work to be a formal description of the species, and karpinskii was much more popularly used) and list the author citation as Amalitskii, 1922.
Scutosaurus is a common fossil at the North Dvina site, and is known from at least six fairly complete skeletons, as well as numerous isolated body and skull remains, and scutes (osteoderms). It is the most completely known pareiasaur. All Scutosaurus specimens date to the Upper Tatarian (Vyatskian) Russian faunal stage, which may roughly correspond with the Lopingian epoch of the Upper Permian (259–252 million years ago). In 1996, Russian paleontologist Valeriy K. Golubev described the faunal zones of the site, and listed the Scutosaurus zone as extending from roughly the middle Wuchiapingian to the middle Changhsingian, which followed the "Proelginia" stage beginning in the early Wuchiapingian.
Anatomy
Pareiasaurs were among the largest reptiles during the Permian. Scutosaurus is a rather large pareiasaur, measuring about in length and weighing up to . The entire body would have been covered in rough osteoderms, which feature a central boss with a spine. These osteoderms appear to have been largely separate from each other, but may have been closely sutured together over the shoulder and pelvis as in Elginia. The limbs bore small conical studs. Pareiasaurs feature a short stout body, and a short tail. Scutosaurus has 19 presacral vertebrae. Pareiasaurs, as well as many other common herbivorous Permian tetrapods, had a large body, barrel-shaped ribcage, and engorged limbs and pectoral and pelvic girdles. The pareiasaur shoulder blade is large, plate-like, slightly expanded towards the arm, and vertically oriented. The acromion (which connects to the large clavicle) is short and blunt, like those of early turtles, and is placed at the bottom of the shoulder blade. In articulated specimens (where the positions of the jointed bones have been preserved), there is a small gap between the clavicle and the shoulder blade. Early pareiasaurs have a cleithrum which runs along the shoulder blade, but later ones including Scutosaurus lost this. The digits on the hands and feet are short. The dorsal vertebrae are short, tall, and robust, and supported large and strongly curved ribs. The broad torso may have conferred an expansive digestive system.
The cheeks strongly flare out and terminate with long pointed bosses. The bosses of the skull are generally much more prominent than those of other pareiasaurs. The maxilla features a horn just behind the nostrils. The two holes on the back of the palate (the interpterygoid vacuities) are large. All pareiasaurs have broad snouts containing a row of closely packed, tall, blade-like, and heterodont teeth with varying numbers of cusps depending on the tooth and species. Scutosaurus has 18 teeth in the upper jaw (which feature anywhere from 9–11 cusps), and 16 in the lower (13–17 cusps). The tips of the upper teeth jut outward somewhat. The tongue side of the lower teeth bear a triangular ridge, and some random teeth in either jaw can have a cusped cingulum. Unlike other pareiasaurs, Scutosaurus has a small tubercle (a bony projection) on the base of the skull between the basal tubera.
Palaeobiology
Scutosaurus was a massively built reptile, with bony armor, and a number of spikes decorating its skull. Despite its relatively small size, Scutosaurus was heavy, and its short legs meant that it could not move at speed for long periods of time, which made it vulnerable to attack by large predators. To defend itself Scutosaurus had a thick skeleton covered with powerful muscles, especially in the neck region. Underneath the skin were rows of hard, bony plates (scutes) that acted like a form of brigandine armor.
Pareiasaurs had long been thought to be terrestrial, but it is difficult to assess their range of locomotion given the lack of modern anatomical analogues. In 1987, Ivakhnenko hypothesised that they were aquatic or amphibious due to the deep and low-lying pectoral girdle, short but engorged limbs, and thick cartilage on the limb joints, which are reminiscent of the aquatic dugong. Subsequent studies—including stable isotope analyses and footprint analyses—on various African and Eurasian remains have all reported results consistent with terrestrial behavior. Caseids have a broadly similar build to pareiasaurs, and possibly exhibited the same locomotory habits. Both have thin, porous long bones, which are consistent with modern diving creatures, but the overall heavy torso would impede such a behavior. Nonetheless, similarly graviportal creatures have much thicker long bones. In 2016, zoologist Markus Lambertz and colleagues, based on the thin bones and short neck unsuited for reaching low-lying plants, suggested that caseids were predominantly aquatic and only came ashore during brief intervals. Overall, anatomical evidence seems to be at direct odds with isotopic evidence; it is possible that bone anatomy was more related to the animal's weight than its lifestyle.
Like other pareiasaurs, Scutosaurus has been shown to have had a fast initial growth rate, with cyclical growth intervals. Following this possibly relatively short juvenile period, an individual would have reached 75% of its full size, and continued growing at a slower rate for several years more. This switch from fast to slow growth potentially signaled the onset of sexual maturity.
Paleoecology
Scutosaurus comes from the Salarevskaya Formation, which has a uniformly red coloration and comprises paleosol horizons, which were deposited in cyclically alternating shallow-water and dry settings. The paleosol horizons are highly variable in shape and size throughout the formation, which may mean they came from different sources (polygenic). Paleosols gradually disappear in the upper part of the formation, where the thickness of beds becomes much more discontinuous as well as irregular (from some millimeters to several meters), and blue spots appear which may represent the accumulation of reduced iron oxides. These beds are capped off by a carbonate shell, varying from a small knot to up to a meter (3.3 ft) thick. The paleosols and shells feature holes left behind by plant roots, but these are absent in the clay-siltstone breccias and sand lenses. The formation has typically been explained as the result of several catastrophic floods washing over arid to semi-arid plains during wet seasons, featuring several temporarily filled channels and permanently dry lakes.
Scutosaurus was a member of the pareiasaurian–gorgonopsian fauna dating to the Upper Tatarian, dominated by pareiasaurs, anomodonts, gorgonopsians, therocephalians, and cynodonts. Unlike earlier beds, dinocephalians are completely absent. Scutosaurus is identified in the Sokolki fauna, which features predominantly the former three groups. The only herbivore other than Scutosaurus is Vivaxosaurus. Carnivores are instead much more common, the largest identified being Inostrancevia (I. latifrons and I. alexandri); the other gorgonopsians are Pravoslavlevia and Sauroctonus progressus. Other carnivores include the therocephalian Annatherapsidus petri and the cynodont Dvinia; chroniosuchid and seymouriamorph amphibians have also been identified, including Karpinskiosaurus, Kotlassia, and Dvinosaurus. As for plants, the area has yielded various mosses, lepidophytes, ferns, and peltaspermaceans.
| Biology and health sciences | Parareptilia | Animals |