1383616
https://en.wikipedia.org/wiki/Shotgun%20slug
Shotgun slug
A shotgun slug is a heavy projectile (a slug) made of lead, copper, or other material and fired from a shotgun. Slugs are designed for hunting large game and for other uses, particularly in areas near human populations, where their short range and slow speed help increase the safety margin. The first effective modern shotgun slug was introduced by Wilhelm Brenneke in 1898, and his design remains in use today. Most shotgun slugs are designed to be fired through a cylinder bore, improved cylinder choke, rifled choke tubes, or fully rifled bores. Slugs differ from round ball lead projectiles in that they are stabilized in some manner. In the early development of firearms, smooth-bored barrels were not differentiated to fire either single or multiple projectiles. Single projectiles were used for larger game and warfare, though shot could be loaded as needed for small game, birds, and activities such as trench clearing and hunting. As firearms became specialized and differentiated, shotguns were still able to fire round balls, though rifled muskets were far more accurate and effective. Modern slugs emerged as a way of improving on the accuracy of round balls. Early slugs were heavier in front than in the rear, similar to a Minié ball, to provide aerodynamic stabilization. Rifled barrels, rifled slugs and rifled choke tubes were developed later to provide gyroscopic spin stabilization in place of or in addition to aerodynamic stabilization. Some of these slugs are saboted sub-caliber projectiles, resulting in greatly improved external ballistics performance. A shotgun slug typically has more physical mass than a rifle bullet. For example, the lightest common .30-06 Springfield rifle bullet weighs 150 grains, while the lightest common 12 gauge shotgun slug weighs oz. Slugs made of low-density material, such as rubber, are available as less-than-lethal specialty ammunition. Uses Shotgun slugs are used to hunt medium to large game at short ranges by firing a single large projectile rather than a large number of smaller ones. In many populated areas, hunters are restricted to shotguns even for medium to large game, such as deer and elk, due to concerns about the range of modern rifle bullets. In such cases a slug will provide a longer range than a load of buckshot, which traditionally was used at ranges up to approximately , without approaching the range of a rifle. In Alaska, seasoned professional guides and wildlife officials use pump-action 12 gauge shotguns loaded with slugs for defense against both black and brown bears. Law enforcement officers are frequently equipped with shotguns. In contrast to traditional buckshot, slugs offer benefits of accuracy, range, and increased wounding potential at longer ranges while avoiding stray pellets that could injure bystanders or damage property. Further, a shotgun allows selecting a shell suited to the need in a variety of situations; examples include a less-lethal cartridge such as a bean bag round, as well as conventional buckshot and slugs. A traditional rifle would offer greater range and accuracy than slugs, but without the variety of ammunition choices and versatility. Design considerations The mass of a shotgun slug is kept within SAAMI pressure limits for shot loads in any given shotgun shell load design. Slugs are designed to pass safely through open chokes and should never be fired through tight or unknown barrels. 
The internal pressure of the shotshell load will actually be slightly higher than that of the equivalent-mass slug projectile load, due to an increased resistance that occurs from a phenomenon known as shot setback. Common 12 gauge slug masses are oz, 1 oz, and oz, the same weight as common birdshot payloads. Comparisons with rifle bullets A 1 oz Foster 12 gauge shotgun slug achieves a velocity of approximately with a muzzle energy of . Slugs travel at around with a muzzle energy of . In contrast, a .30-06 Springfield bullet weighing at a velocity of achieves an energy of . A bullet at , which is a very common .30-06 Springfield load and not its true maximum potential, achieves of energy. Due to the slug's larger caliber and shape, it has greater air resistance and slows down much more quickly than a bullet. It slows to less than half its muzzle energy at , which is below the minimum recommended energy threshold for taking large game. The minimum recommended muzzle energy is ( for deer, for elk, and for moose). A slug also becomes increasingly inaccurate with distance; out to or more, with a maximum practical range of approximately . In contrast, centerfire cartridges fired from rifles can easily travel at longer ranges of or more. Shotgun slugs are best suited for use over shorter ranges. Taylor knock-out factor The Taylor knock-out factor (TKOF) was developed as a measure of stopping power for hunting game; however, it is a rather flawed calculation. It is defined as the product of bullet mass, velocity and diameter, using the imperial units grains (equal to 64.79891 mg), feet per second (equal to 0.3048 m/s) and inches (equal to 25.4 mm): TKOF = mass in grains × velocity in ft/s × diameter in inches / 7000 (a worked calculation appears at the end of this article). Some TKOF example values for shotgun slugs are 71 TKOF for a 70 mm (2.75 in) slug (i.e. 437.5 grain (1 oz) × 1,560 FPS × 0.729 caliber / 7000 = 71.07 TKOF) and 80 TKOF for a 76 mm (3 in) slug (i.e. 437.5 grain (1 oz) × 1,760 FPS × 0.729 caliber / 7000 = 80.19 TKOF). For comparison, TKOF values for typical rifle cartridges are considerably lower. Types Full-bore slugs Full-bore slugs such as the Brenneke and Foster designs are spin-stabilized by angled fins on the slug's outer walls. The slight 750 RPM spin is enough to stabilize the slug because the slug's center of pressure is so much further back than its center of mass. Saboted slugs are similar in shape to handgun bullets and airgun pellets. Their center of pressure is in front of their center of mass, meaning a higher twist rate is required to achieve proper stabilization. Most saboted slugs are designed for rifled shotgun barrels and are stabilized through gyroscopic forces from their spin. Brenneke slugs The Brenneke slug was developed by the German gun and ammunition designer Wilhelm Brenneke (1865–1951) in 1898. The original Brenneke slug is a solid lead slug with ribs cast onto the outside, much like a rifled Foster slug. There is a plastic, felt or cellulose fiber wad attached to the base that remains attached after firing. This wad serves as a gas seal, preventing the gases from going around the projectile. The lead "ribs" that are used for inducing spin also swage through any choked bore from improved cylinder to full: the soft-metal (typically lead) fins squish or swage down in size to fit through the choke, allowing easy passage. Foster slugs The "Foster slug", invented by Karl M. 
Foster in 1931, and patented in 1947, is a type of shotgun slug designed to be fired through a smoothbore shotgun barrel, even though it is commonly labeled as a "rifled" slug. A rifled slug is for smooth bores and a sabot slug is for rifled barrels. Most Foster slugs also have "rifling", which consists of ribs on the outside of the slug. Like the Brenneke, these ribs impart a rotation on the slug to correct for manufacturing irregularities, thus improving precision (i.e. group size). Similar to traditional rifling, the rotation of the slug imparts gyroscopic stabilization. Saboted slugs Saboted slugs are shotgun projectiles smaller than the bore of the shotgun and supported by a plastic sabot. The sabot is traditionally designed to engage the rifling in a rifled shotgun barrel and impart a ballistic spin onto the projectile. This differentiates them from traditional slugs, which are not designed to benefit from a rifled barrel (though neither combination damages the barrel). Because they do not contact the bore, saboted slugs can be made from a variety of materials including lead, copper, brass, or steel. Saboted slugs can vary in shape, but are typically bullet-shaped for increased ballistic coefficient and greater range. The sabot is generally plastic and serves to seal the bore and keep the slug centered in the barrel while it rotates with the rifling. The sabot separates from the slug after it departs the muzzle. Saboted slugs fired from rifled bores are superior in accuracy to any smooth-bore slug option, with accuracy approaching that of low-velocity rifle calibers. Wad slugs A modern variant, intermediate between the Foster slug and the sabot slug, is the wad slug. This is a type of shotgun slug designed to be fired through a smoothbore shotgun barrel. Like the traditional Foster slug, a deep hollow is located in the rear of this slug, which serves to retain the center of mass near the front tip of the slug. However, unlike the Foster slug, a wad slug additionally has a key or web wall molded across the deep hollow, which serves to increase the structural integrity of the slug while also reducing the amount of expansion of the slug when fired, reducing the stress on the shot wad in which it rides down the barrel. Also, unlike Foster slugs, which have thin fins on the outside of the slug much like those on the Brenneke, the wad slug is shaped with an ogive or bullet shape, with a smooth outer surface. The wad slug is loaded using a standard shotshell wad, which acts like a sabot. The diameter of the wad slug is slightly less than the nominal bore diameter, being around for a 12-gauge wad slug, and a wad slug is generally cast solely from pure lead, necessary for increasing safety if the slug is ever fired through a choked shotgun. Common 12 gauge slug masses are oz, 1 oz, and oz, the same as common birdshot payloads. Depending on the specific stack-up, and largely on which hull is specified, a card wad is also sometimes located between the slug and the shotshell wad, with the primary intended purpose of improving fold crimps on the loaded wad slug shell, which serve to regulate fired shotshell pressures and improve accuracy. It is also possible to fire a wad slug through rifled slug barrels, and, unlike with the Foster slug, where lead fouling is often a problem, a wad slug typically causes no significant leading, being nested inside a traditional shotshell wad functioning as a sabot as it travels down the shotgun barrel. 
Accuracy of wad slugs falls off quickly at ranges beyond , thereby largely equaling the ranges possible with Foster slugs, while still not reaching the ranges possible with traditional sabot slugs using thicker-walled sabots. Unlike the Foster slug which is traditionally roll-crimped, the wad slug is fold-crimped. Because of this important difference, and because it uses standard shotshell wads, a wad slug can easily be reloaded using any standard modern shotshell reloading press without requiring specialized roll-crimp tools. Plumbata slugs A plumbata slug has a plastic stabilizer attached to the projectile. The stabilizer may be fitted into a cavity in the bottom of the slug, or it may fit over the slug and into external notches on the slug. With the first method discarding sabots may be added. And with the second, the stabilizer may act as a sabot, but remains attached to the projectile and is commonly known as an "Impact Discarding Sabot" (IDS). Steel slugs There are some types of all-steel subcaliber slugs supported by a protective plastic sabot (the projectile would damage the barrel without a sabot). Examples include Russian "Tandem" wadcutter-type slug (the name is historical, as early versions consisted of two spherical steel balls) and ogive "UDAR" ("Strike") slug and French spool-like "Balle Blondeau" (Blondeau slug) and "Balle fleche Sauvestre" (Sauvestre flechette) with steel sabot inside expanding copper body and plastic rear empennage. Made of non-deforming steel, these slugs are well-suited to shooting in brush, but may produce overpenetration. They also may be used for disabling vehicles by firing in the engine compartment or for defeating hard body armor. Improvised slugs Wax slugs Another variant of a Great Depression–era shotgun slug design is the wax slug. These were made by hand by cutting the end off a standard birdshot loaded shotshell, shortening the shell very slightly, pouring the lead shot out, and melting paraffin, candle wax, or crayons in a pan on a stovetop, mixing the lead birdshot in the melted wax, and then using a spoon to pour the liquified wax containing part of the birdshot back into the shotshell, all while not overfilling the shotgun shell. Once the shell cooled, the birdshot was now held in a mass by the cooled paraffin, and formed a slug. No roll or fold crimp was required to hold the wax slug in the hull. These were often used to hunt deer during the Depression. Cut shell slugs Yet another expedient shotgun slug design is the cut shell. These are made by hand from a standard birdshot shell by cutting a ring around and through the hull of the shell that nearly encircles the shell, with the cut traditionally located in the middle of the wad separating the powder and shot. A small amount of the shell wall is retained, amounting to roughly a quarter of the circumference of the shotshell hull. When fired, the end of the hull separates from the base and travels through the bore and down range. Cut shells have the advantage of expedience. They can be handmade on the spot as the need arises while on a hunt for small game if a larger game animal such as a deer or a bear appears. In terms of safety, part of the shell may remain behind in the barrel, causing potential problems if not noticed and cleared before another shot is fired. Guns for use with slugs Many hunters hunt with shotgun slugs where rifle usage is not allowed, or as a way of saving the cost of a rifle by getting additional use out of their shotgun. 
A barrel for shooting slugs can require some special considerations. The biggest drawback of a rifled shotgun barrel is the inability to fire buckshot or birdshot accurately. While buckshot or birdshot will not rapidly damage the gun (it can wear the rifling of the barrel with long-term repeated use), the shot's spread increases nearly four-fold compared to a smooth bore, and pellets tend to form a ring-shaped pattern due to the pellets' tangential velocity moving them away from the bore line. In practical terms, the effective range of a rifled shotgun loaded with buckshot is limited to or less. Iron sights or a low magnification telescopic sight are needed for accuracy, rather than the bead sight used with shot, and an open choke is best. Since most current production shotguns come equipped with sighting ribs and interchangeable choke tubes, converting a standard shotgun to a slug gun can be as simple as attaching clamp-on sights to the rib and switching to a skeet or cylinder choke tube. There are also rifled choke tubes of cylinder bore. Many repeating shotguns have barrels that can easily be removed and replaced in under a minute with no tools, so many hunters simply use an additional barrel for shooting slugs. Slug barrels will generally be somewhat shorter, have rifle type sights or a base for a telescopic sight, and may be either rifled or smooth bore. Smooth-bore shotgun barrels are quite a bit less expensive than rifled shotgun barrels, and Foster type slugs, as well as wad slugs, can work well up to in a smooth-bore barrel. For achieving accuracy at and beyond, however, a dedicated rifled slug barrel usually provides significant advantages. Another option is to use a rifled choke in a smooth-bore barrel, at least for shotguns having a removable choke tube. Rifled chokes are considerably less expensive than a rifled shotgun barrel, and a smooth-bore barrel paired with a rifled choke is often nearly as accurate as a rifled shotgun barrel dedicated for use with slugs. There are many options in selecting shotguns for use with slugs. Improvements in slug performance have also led to some very specialized slug guns. The H&R Ultra Slug Hunter, for example, uses a heavy rifled barrel (see Accurize) to obtain high accuracy from slugs. Reloading shotgun slugs Shotgun slugs are often hand loaded, primarily to save cost but also to improve performance over that possible with commercially manufactured slug shells. In contrast, it is possible to reload slug shells with hand-cast lead slugs for less than $0.50 (c. 2013) each. The recurring cost depends heavily on which published recipe is used. Some published recipes for handloading 1 oz ( 12 gauge slugs require as much as of powder each, whereas other 12 gauge recipes for oz ( slugs require only of powder. Shotguns operate at much lower pressures than pistols and rifles, typically operating at pressures of or less, for 12 gauge shells, whereas rifles and pistols routinely are operated at pressures in excess of , and sometimes upwards of . The SAAMI maximum permitted pressure limit is only for 12 gauge and shells, including shotgun slugs, so the typical operating pressures for many shotgun shells are only slightly below the maximum permitted pressures allowed for the use of safe ammunition. This small safety margin, and the possibility of pressure varying by over with small changes in components, require great care and consistency in hand-loading. 
Legal issues Shotgun slugs are sometimes subject to specific regulation in many countries in the world. Legislation differs with each country. The Netherlands Large game (including deer and wild boar) hunting is only allowed with large caliber rifles; shotguns are only allowed for small and medium-sized game, up to foxes and geese. However, when a shotgun has a rifled barrel, it is considered a rifle, and it becomes legal for hunting roe deer with a minimum caliber and at a and deer or wild boar with a minimum caliber and at . Sweden Slugs fired from a single-barrel shotgun are allowed for hunting wild boar, fallow deer, and mouflon, although when hunting for wounded game there are no restrictions. The shot must be fired at a range of no more than . The hunter must also have the legal right to use a rifle for such game in order to hunt with shotgun slugs. United Kingdom Ammunition which contains no fewer than five projectiles, none of which exceed in diameter, is legal with a Section 2 Shotgun Certificate. Slugs, which contain only one projectile and usually exceed in diameter, are controlled under the Firearms Act, and require a firearms certificate to possess, which is very strictly regulated. Legal uses in the UK include, but are not restricted to, practical shotgun enthusiasts as members of clubs and at competitions, such as those run by or affiliated to the UKPSA. United States Rifled barrels for shotguns are an unusual legal issue in the United States. Firearms with rifled barrels are designed to fire single projectiles, and a firearm that is designed to fire a single projectile with a diameter greater than .50 inches (12.7 mm) is considered a destructive device and as such is severely restricted. However, the ATF has ruled that as long as the gun was designed to fire shot, and modified (by the user or the manufacturer) to fire single projectiles with the addition of a rifled barrel, then the firearm is still considered a shotgun and not a destructive device. In some areas, rifles are prohibited for hunting animals such as deer. This is generally due to safety concerns. Shotgun slugs have a far shorter maximum range than most rifle cartridges, and are safer for use near populated areas. In some areas, there are designated zones and special shotgun-only seasons for deer. This may include a modern slug shotgun, with rifled barrel and high performance sabot slugs, which provides rifle-like power and accuracy at ranges over .
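As a rough worked illustration of the Taylor knock-out and muzzle-energy arithmetic quoted above, the following sketch reproduces the two 12 gauge TKOF examples from the article (437.5 grains at 1,560 and 1,760 ft/s, 0.729 in diameter). The .30-06 comparison load (180 grains at about 2,700 ft/s) is an assumed, typical example supplied here only to fill in the "compare with rifles" point, not a figure taken from the article.

```python
# Minimal sketch of the TKOF and muzzle-energy formulas discussed above.
# TKOF = mass (grains) * velocity (ft/s) * diameter (inches) / 7000
# Energy (ft-lb) = mass (grains) * velocity^2 / (2 * 7000 * 32.174)
# The .30-06 load below is an assumed, typical example for comparison only.

def tkof(mass_grains: float, velocity_fps: float, diameter_in: float) -> float:
    """Taylor knock-out factor (7000 grains = 1 pound)."""
    return mass_grains * velocity_fps * diameter_in / 7000.0

def muzzle_energy_ftlb(mass_grains: float, velocity_fps: float) -> float:
    """Kinetic energy in foot-pounds."""
    return mass_grains * velocity_fps ** 2 / (2 * 7000.0 * 32.174)

loads = [
    ("12 ga 2.75 in slug (from article)", 437.5, 1560.0, 0.729),
    ("12 ga 3 in slug (from article)",    437.5, 1760.0, 0.729),
    (".30-06, 180 gr (assumed example)",  180.0, 2700.0, 0.308),
]
for name, m, v, d in loads:
    print(f"{name}: TKOF = {tkof(m, v, d):.1f}, "
          f"muzzle energy = {muzzle_energy_ftlb(m, v):,.0f} ft-lb")
```

Running this reproduces the 71 and 80 TKOF figures cited above and shows why rifle cartridges, despite higher muzzle energy, score much lower on this diameter-weighted measure.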
Technology
Ammunition
null
1383986
https://en.wikipedia.org/wiki/Absorption%20%28chemistry%29
Absorption (chemistry)
Absorption is a physical or chemical phenomenon or a process in which atoms, molecules or ions enter the liquid or solid bulk phase of a material. This is a different process from adsorption, since molecules undergoing absorption are taken up by the volume, not by the surface (as is the case for adsorption). A more common definition is that "Absorption is a chemical or physical phenomenon in which the molecules, atoms and ions of the substance getting absorbed enter into the bulk phase (gas, liquid or solid) of the material in which it is taken up." A more general term is sorption, which covers absorption, adsorption, and ion exchange. Absorption is a condition in which something takes in another substance. In many processes important in technology, chemical absorption is used in place of the physical process, e.g., absorption of carbon dioxide by sodium hydroxide – such acid-base processes do not follow the Nernst partition law (see: solubility). For some examples of this effect, see liquid-liquid extraction. It is possible to extract a solute from one liquid phase to another without a chemical reaction. Examples of such solutes are noble gases and osmium tetroxide. The process of absorption means that a substance captures and transforms energy. An absorbent distributes the material it captures throughout its whole volume, whereas an adsorbent distributes it only across its surface. The process by which a gas or liquid penetrates into the body of an absorbent is commonly known as absorption. Equation If absorption is a physical process not accompanied by any other physical or chemical process, it usually follows the Nernst distribution law: "the ratio of concentrations of some solute species in two bulk phases that are in contact and at equilibrium is constant for a given solute and pair of bulk phases": KN = C1/C2, where C1 and C2 are the concentrations of the solute species "x" in phases "1" and "2". The value of the constant KN depends on temperature and is called the partition coefficient. This equation is valid if the concentrations are not too large and if the species "x" does not change its form in either of the two phases "1" or "2". If such a molecule undergoes association or dissociation, then this equation still describes the equilibrium between "x" in both phases, but only for the same form – the concentrations of all remaining forms must be calculated by taking into account all the other equilibria. In the case of gas absorption, one may calculate its concentration by using, e.g., the ideal gas law, c = p/RT. Alternatively, one may use partial pressures instead of concentrations. Types of absorption Absorption is a process that may be chemical (reactive) or physical (non-reactive). Chemical absorption Chemical absorption or reactive absorption is a chemical reaction between the absorbed and the absorbing substances. Sometimes it combines with physical absorption. This type of absorption depends upon the stoichiometry of the reaction and the concentration of its reactants. Such processes may be carried out in different units, with a wide spectrum of phase flow types and interactions. In most cases, reactive absorption is carried out in plate or packed columns. Physical absorption Water in a solid Hydrophilic solids, which include many solids of biological origin, can readily absorb water. Polar interactions between water and the molecules of the solid favor partition of the water into the solid, which can allow significant absorption of water vapor even in relatively low humidity. Moisture regain A fiber (or other hydrophilic material) that has been exposed to the atmosphere will usually contain some water even if it feels dry. 
The water can be driven off by heating in an oven, leading to a measurable decrease in weight, which will gradually be regained if the fiber is returned to a 'normal' atmosphere. This effect is crucial in the textile industry – where the proportion of a material's weight made up by water is called the moisture regain.
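To make the Nernst distribution law quoted above concrete, the short sketch below splits a fixed amount of solute between two phases so that the concentration ratio equals the partition coefficient KN. The partition coefficient, volumes, and total amount of solute are assumed, illustrative values, not data from the article.

```python
# Minimal sketch of the Nernst distribution (partition) law described above:
# at equilibrium, C1 / C2 = KN for a given solute and pair of bulk phases.
# KN, the phase volumes, and the total amount of solute are assumed values.

def equilibrium_concentrations(n_total_mol: float, v1_l: float, v2_l: float, k_n: float):
    """Split n_total between phases 1 and 2 so that (n1/v1) / (n2/v2) == k_n."""
    n2 = n_total_mol / (1.0 + k_n * v1_l / v2_l)   # from n1/v1 = k_n * n2/v2 and n1 + n2 = n_total
    n1 = n_total_mol - n2
    return n1 / v1_l, n2 / v2_l

c1, c2 = equilibrium_concentrations(n_total_mol=0.010, v1_l=0.100, v2_l=0.100, k_n=4.0)
print(f"c1 = {c1:.3f} M, c2 = {c2:.3f} M, ratio = {c1 / c2:.1f}")  # ratio equals KN
```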
Physical sciences
Other separations
Chemistry
1384005
https://en.wikipedia.org/wiki/Absorption%20%28electromagnetic%20radiation%29
Absorption (electromagnetic radiation)
In physics, absorption of electromagnetic radiation is how matter (typically electrons bound in atoms) takes up a photon's energy—and so transforms electromagnetic energy into internal energy of the absorber (for example, thermal energy). A notable effect of the absorption of electromagnetic radiation is attenuation of the radiation; attenuation is the gradual reduction of the intensity of light waves as they propagate through the medium. Although the absorption of waves does not usually depend on their intensity (linear absorption), in certain conditions (optics) the medium's transparency changes by a factor that varies as a function of wave intensity, and saturable absorption (or nonlinear absorption) occurs. Quantifying absorption Many approaches can potentially quantify radiation absorption, with key examples following. The absorption coefficient along with some closely related derived quantities The attenuation coefficient (NB used infrequently with meaning synonymous with "absorption coefficient") The Molar attenuation coefficient (also called "molar absorptivity"), which is the absorption coefficient divided by molarity (see also Beer–Lambert law) The mass attenuation coefficient (also called "mass extinction coefficient"), which is the absorption coefficient divided by density The absorption cross section and scattering cross-section, related closely to the absorption and attenuation coefficients, respectively "Extinction" in astronomy, which is equivalent to the attenuation coefficient Other measures of radiation absorption, including penetration depth and skin effect, propagation constant, attenuation constant, phase constant, and complex wavenumber, complex refractive index and extinction coefficient, complex dielectric constant, electrical resistivity and conductivity. Related measures, including absorbance (also called "optical density") and optical depth (also called "optical thickness") All these quantities measure, at least to some extent, how well a medium absorbs radiation. Which among them practitioners use varies by field and technique, often due simply to the convention. Measuring absorption The absorbance of an object quantifies how much of the incident light is absorbed by it (instead of being reflected or refracted). This may be related to other properties of the object through the Beer–Lambert law. Precise measurements of the absorbance at many wavelengths allow the identification of a substance via absorption spectroscopy, where a sample is illuminated from one side, and the intensity of the light that exits from the sample in every direction is measured. A few examples of absorption are ultraviolet–visible spectroscopy, infrared spectroscopy, and X-ray absorption spectroscopy. Applications Understanding and measuring the absorption of electromagnetic radiation has a variety of applications. In radio propagation, it is represented in non-line-of-sight propagation. For example, see computation of radio wave attenuation in the atmosphere used in satellite link design. In meteorology and climatology, global and local temperatures depend in part on the absorption of radiation by atmospheric gases (such as in the greenhouse effect) and land and ocean surfaces (see albedo). In medicine, X-rays are absorbed to different extents by different tissues (bone in particular), which is the basis for X-ray imaging. 
In chemistry and materials science, different materials and molecules absorb radiation to different extents at different frequencies, which allows for material identification. In optics, sunglasses, colored filters, dyes, and other such materials are designed specifically with respect to which visible wavelengths they absorb, and in what proportions. In biology, photosynthetic organisms require that light of the appropriate wavelengths be absorbed within the active area of chloroplasts, so that the light energy can be converted into chemical energy within sugars and other molecules. In physics, the D-region of Earth's ionosphere is known to significantly absorb radio signals that fall within the high-frequency electromagnetic spectrum. In nuclear physics, absorption of nuclear radiation can be used for measuring fluid levels, for densitometry, or for thickness measurements. The scientific literature also describes a system of mirrors and lenses that, combined with a laser, "can enable any material to absorb all light from a wide range of angles."
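As a brief illustration of the Beer–Lambert relation mentioned above, the sketch below converts a molar attenuation coefficient, concentration, and path length into an absorbance A and the corresponding transmitted fraction 10^(-A). All numerical values are hypothetical and chosen only to show the arithmetic.

```python
# Minimal sketch of the Beer-Lambert law referenced above:
# A = epsilon * c * l, and the transmitted fraction is T = 10 ** (-A).
# epsilon, c, and l below are assumed, illustrative values.

def absorbance(epsilon_l_per_mol_cm: float, conc_mol_per_l: float, path_cm: float) -> float:
    return epsilon_l_per_mol_cm * conc_mol_per_l * path_cm

A = absorbance(epsilon_l_per_mol_cm=1.5e4, conc_mol_per_l=5e-5, path_cm=1.0)
T = 10 ** (-A)  # fraction of incident light transmitted through the sample
print(f"A = {A:.2f}, transmittance = {T:.1%}")
```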
Physical sciences
Electromagnetic radiation
Physics
1385908
https://en.wikipedia.org/wiki/Star-nosed%20mole
Star-nosed mole
The star-nosed mole (Condylura cristata) is a small semiaquatic mole found in moist, low elevation areas in the northeastern parts of North America. It is the only extant member of the tribe Condylurini and genus Condylura, and it has more than 25,000 minute sensory receptors in touch organs, known as Eimer's organs, with which this hamster-sized mole feels its way around. With the help of its Eimer's organs, it may be perfectly poised to detect seismic wave vibrations. The nose is about in diameter with its Eimer's organs distributed on 22 appendages. Eimer's organs were first described in the European mole in 1872 by German zoologist Theodor Eimer. Other mole species also possess Eimer's organs, though they are not as specialized or numerous as in the star-nosed mole. Because the star-nosed mole is functionally blind, the snout was long suspected to be used to detect electrical activity in prey animals, though little, if any, empirical support has been found for this hypothesis. The nasal star and dentition of this species appear to be primarily adapted to exploit extremely small prey. A report in the journal Nature gives this animal the title of fastest-eating mammal, taking as little as 120 milliseconds (average: 227 ms) to identify and consume individual food items. Its brain decides in approximately eight milliseconds if prey is edible or not. This speed is at the limit of the speed of neurons. These moles are also able to smell underwater, accomplished by exhaling air bubbles onto objects or scent trails and then inhaling the bubbles to carry scents back through the nose. Ecology and behavior The star-nosed mole lives in wet lowland areas and eats small invertebrates such as aquatic insects (such as the larvae of caddisflies, midges, dragonflies, damselflies, crane flies, horseflies, predaceous diving beetles, and stoneflies), terrestrial insects, worms (such as earthworms, leeches, and other annelids), mollusks, and aquatic crustaceans, as well as small amphibians and small fish. Condylura cristata has also been found in dry meadows farther away from water. They have also been found in the Great Smoky Mountains as high as . However, the star-nose mole does prefer wet, poorly drained areas and marshes. It is a good swimmer and can forage along the bottoms of streams and ponds. Like other moles, this animal digs shallow surface tunnels for foraging; often, these tunnels exit underwater. It is active day and night and remains active in winter when it has been observed tunneling through the snow and swimming in ice-covered streams. C. cristata is particularly adept at thermoregulation, maintaining a high body temperature in a wide range of external conditions relative to other Talpid moles. This explains its ability to thrive in cold aquatic environments. Little is known about the social behavior of the species, but it is suspected to be colonial. This mole mates in late winter or early spring, and the female has one litter of typically four or five young in late spring or early summer. However, females are known to have a second litter if their first is unsuccessful. At birth, each offspring is about long, hairless, and weighs about . Their eyes, ears, and star are all sealed, only opening and becoming useful about 14 days after birth. They become independent after about 30 days and are fully mature after 10 months. 
Predators include the red-tailed hawk, great horned owl, barn owl, screech owl, foxes, weasels, minks, various skunks and mustelids, and large fish such as the northern pike, as well as domestic cats. Snout comparison to visual organ Vanderbilt University neuroscientist Kenneth Catania, who has studied star-nosed moles for 20 years, recently turned his research to the study of star-moles as a route to understanding general principles about how human brains process and represent sensory information. He called star-moles "a gold mine for discoveries about brains and behavior in general—and an unending source of surprises". Comparing the mole's snout to vision, his research showed that whenever the mole touched potential food, it made a sudden movement to position the smallest rays, the twin rays number 11, over the object for repeated rapid touches. He reports: "The similarities with vision were striking. The star movements resembled saccadic eye movements—quick movements of the eyes from one focus point to another—in their speed and time-course. The two 11th rays are over-represented in the primary somatosensory cortex relative to their size, just as the small visual fovea in primates—a small region in the center of the eye that yields the sharpest vision—is over-represented in primary visual cortex." He notes that some bats also have an auditory fovea for processing important echolocation frequencies, suggesting that "evolution has repeatedly come to the same solution for constructing a high-acuity sensory system: subdivide the sensory surface into a large, lower-resolution periphery for scanning a wide range of stimuli, and a small, high-resolution area that can be focused on objects of importance". The star-shaped nose is a unique organ only found on the star-nosed mole. Living as it does, in complete darkness, the star-nosed mole relies heavily on the mechanical information of its remarkable specialized nose to find and identify their invertebrate prey without using sight (since moles have small eyes and a tiny optic nerve). This organ is often recognized by its high sensitivity and reaction speed. In only eight milliseconds it can decide whether something is edible — in fact, this is one of the fastest responses to a stimulus in the animal kingdom and is the reason why the star-nosed mole was lately recognized in the Guinness Book of World Records as the world’s fastest forager. Anatomy and physiology The star-nose is a highly specialized sensory-motor organ shaped by 22 fleshy finger-like appendages, or tendrils, that ring their nostrils and are in constant motion as the mole explores its environment. The star itself is across and thus has a diameter slightly smaller than a typical human fingertip. Nevertheless, it is much larger than the nose of other mole species, covering per touch compared to covered by the noses of other mole species. This structure is divided into a high resolution central fovea region (the central 11th pair of rays) and less sensitive peripheral areas. In this way, the star works as a "tactile eye" where the peripheral rays (1–10 on each side) study the surroundings with erratic saccade-like movements and direct the 11th ray to objects of interest, just like the primate’s foveating eye. Regardless of the anatomical position of the star as a distal (protruding or extending) portion of the nose, this is neither an olfactory structure nor an extra hand. The appendages do not contain muscles or bones and are not used to manipulate objects or capture prey. 
They are controlled by tendons connected to a complex series of muscles attached to the skull, in order to perform a role that seems to be purely mechanical. For this purpose, the star also contains a remarkably specialized epidermis covered entirely by 25,000 small raised domes or papillae of approximately in diameter. These domes, known as Eimer's organs, are the only type of receptor organs found in the star of the star-nosed mole, which indicates that the star-like structure has a clearly mechanical function. Eimer's organ is a sensory structure also found in nearly all of the approximately 30 species of mole; however, none contains as many as Condylura. This large number of specialized receptors makes the star ultrasensitive – about 6 times more sensitive than the human hand, which contains about 17,000 receptors. Each Eimer's organ is supplied by a number of primary afferents, so the star is densely innervated. Each organ is associated with a Merkel cell-neurite complex at the base of the cell column, a lamellated corpuscle in the dermis just below the column, and a series of free nerve endings that originate from myelinated fibers in the dermis, run through the central column, and end in a ring of terminal swellings just below the outer keratinized skin surface. All 25,000 Eimer's organs distributed across the 22 appendages of the star share this basic structure. Nevertheless, the fovea region (the 11th pair of rays), which is smaller in area, carries fewer of these organs – about 900 Eimer's organs on its surface, while some of the lateral rays have over 1,500. This may seem to contradict the fact that this region has higher resolution and an important role in foraging behavior. However, instead of having more sensory organs, this fovea region uses a different approach in which the skin's surface may be more sensitive to mechanoreceptive input: it has a higher innervation density. Rays 1 through 9 each have about 4 fibers per Eimer's organ, while rays 10 and 11 have significantly higher innervation densities of 5.6 and 7.1 fibers per organ, respectively, revealing how the sensory periphery is differentially specialized across the star. The myelinated fibers innervating the 11 rays were photographed and counted from an enlarged photomontage by Catania and colleagues. The total number of myelinated fibers for half of the star ranged from 53,050 to roughly 58,500; hence the total for the entire star varies from roughly 106,000 to 117,000. This means that tactile information from the environment is transmitted to the somatosensory neocortex rapidly. This would be insufficient without an adequate processing system, but in the star-nosed mole the processing also occurs at a very high speed, almost approaching the upper limit at which nervous systems are capable of functioning. The threshold at which the mole can decide whether or not something is edible is about 25 milliseconds: 12 milliseconds for the neurons in the mole's somatosensory cortex to respond to touch and another 5 milliseconds for motor commands to be conducted back to the star. In comparison, this whole process takes 600 milliseconds in humans. The importance of the star-like nose in the mole's lifestyle is evidenced in the somatosensory representation of the nose. Electrophysiological experiments using electrodes placed on the cortex during stimulation of the body demonstrated that roughly 52% of the cortex is devoted to the nose. 
This means that more than half of the brain is dedicated to processing sensory information acquired by this organ, even though the nose itself is only roughly 10% of the mole's actual size. Thus, it may be concluded that the nose substitutes for the eyes, with the information from it being processed so as to produce a tactile map of the environment under the mole's nose. As in other mammals, the somatosensory cortex of the star-nosed mole is somatotopically organized, such that sensory information from adjacent parts of the nose is processed in adjacent regions of the somatosensory cortex. Therefore, the rays are also represented in the brain. The lowermost and most sensitive pair of rays (the 11th) has a larger representation in the somatosensory cortex, even though these are the shortest pair of appendages on the nose of the star-nosed mole. Another important fact about the representation of the star in the cerebral cortex is that each hemisphere has a clearly visible set of 11 stripes representing the contralateral star. In some favorable cases, a smaller third set of stripes was also apparent; this is unlike other body structures, which have a single representation, with each half of the body represented in the opposite cerebral hemisphere. Thus, unlike in other species, the somatosensory representation of the tactile fovea is not correlated with anatomical parameters but rather is highly correlated with patterns of behavior. Recordings from active neurons in the somatosensory cortex show that most cells (97%) responded to light tactile stimulation with a mean latency of 11.6 milliseconds. In addition, a fairly large proportion of these neurons (41%) were inhibited by stimulation of nearby Eimer's organs outside their excitatory receptive field. Consequently, the ability of the star to rapidly determine the location and identity of objects is enhanced by small receptive fields and an associated collateral inhibition system that constrains cortical neurons with short-latency responses. Sensitivity to mechanical stimuli In 1996, Vanderbilt PhD candidate Paul Marasco determined that the threshold at which the star-like structure senses mechanical stimuli depends on which type of Eimer's organ receptor is excited. He characterized three main classes of Eimer's receptors: one slowly adapting (tonic) receptor and two rapidly adapting (phasic) receptors. The tonic receptor has a response similar to that of a Merkel cell-neurite complex. It has free terminals and is therefore able to detect pressure and texture with high sensitivity and a random sustained discharge. The rapidly adapting responses include a Pacinian-like (on-off) response caused by pressure and mechanical vibrations, with maximum sensitivity to stimuli at a frequency of 250 Hz. The difference between the two rapid responses is that one of them responds only during the compression phase.
The receptors that were sensitive to sweeping were maximally activated across a broad range of frequencies from 5–150 Hz at large displacements ranging from 85 to 485 μm. Conversely, the receptors that respond to compressive stimuli showed a narrow peak of maximal activity at 250–300 Hz with displacements from 10 to 28 μm. Directional sensitivity Based on the circular organization of the nerve endings and their innervation pattern in Eimer's organs, Marasco proposed, on the basis of mapping experiments, that nearly all receptors in the star-nosed mole have a preference for a particular direction of applied stimuli. Thus, while one receptor elicits a strong response if compressed in one direction, it may stay "silent" when compressed in another. Velocity sensitivity Examination of the velocity threshold at which the receptors responded showed that the minimum velocity eliciting a cell response was 46 mm/s, corresponding to the approximate speed of the nose during foraging behavior. Transduction of the mechanical signal Given that Eimer's organ senses mechanical deformation, its mechanism of transduction can be explained in a few steps: stimuli cause depolarization of the receptor membrane, resulting in a receptor potential and therefore a current towards the node of Ranvier; if the receptor potential is maintained and the current reaching the node of Ranvier is large enough, the threshold for producing an action potential is reached; once the action potential is produced, ion channels are activated so that the mechanical impulse is transduced into an electrical signal; this signal is carried along the axon until it reaches the central nervous system (CNS), where the information is processed. Although these summarized steps of mechanical transduction give a hint of how the star-nosed mole converts mechanical information into action potentials, the entire mechanism of transduction behind this intricate mechanoreceptor is still unknown and further studies are required. Behavior Despite their poorly developed eyes, star-nosed moles have an intricate system to detect prey and understand their environment. During exploration, the mole's star-like appendage produces brief touches which compress Eimer's organs against objects or the substrate. When foraging, moles search in random patterns of touches lasting 20–30 milliseconds. Catania and colleagues demonstrated that the tactile organ of the star-nosed mole is preferentially innervated by putative light-touch fibers. When the outer appendages of the star come into slight contact with a potential food source, the nose is quickly shifted so that one or more touches are made with the fovea (the two lower appendages; the 11th pair) to explore objects of interest in more detail – especially potential prey. This foraging behavior is exceptionally fast, such that the mole may touch between 10 and 15 separate areas of the ground every second. It can locate and consume 8 separate prey items in less than 2 seconds and begin searching again for more prey in as little as 120 ms, although the average time is 227 ms. The sequence described constitutes the handling time. In high-speed video studies, the mole always foveated with the 11th appendages to explore a food item. The use of the 11th appendage as a tactile fovea is surprisingly similar to the manner in which human eyes explore details of a visual scene. 
This star-like nose also enables the mole to smell underwater, something which was previously thought impossible in mammals, which requires the inspiration of air during olfaction to convey odorants to the olfactory epithelium. Although the star-like structure is not a chemoreceptor itself, it helps the star-nosed mole blow between 8 and 12 small air bubbles per second, each 0.06 to 0.1 mm in size, onto objects or scent trails. These bubbles are then drawn back into the nostrils, so that odorant molecules in the air bubbles are wafted over the olfactory receptors. The speed of the bubbles is compared to other mole's speed of sniffing. Scientists found that the bubbles are being blown towards targets such as food. Before the star-nosed mole, scientists did not believe that mammals could smell underwater, let alone smell by blowing bubbles. In 1993, Edwin Gould and colleagues proposed that the star-like proboscis had electroreceptors and that the mole was therefore able to sense the electrical field of its prey prior to mechanical inspection by its appendages. Through behavioral experiments, they demonstrated that moles preferred an artificial worm with the simulated electrical field of a live earthworm to an identical arrangement without the electrical field. They suggested, therefore, that the nerve endings in the star’s tentacles are indeed electroreceptors and that the moles move them around constantly to sample the strength of the electromagnetic field at different locations as they search for prey. However, the hypothesis remains unexplained physiologically and has not yet been accepted by the scientific community. Instead, the hypothesis proposed by Catania, in which the function of the appendage is purely tactile, seems to be more feasible and is the one currently accepted. Evolution The development of the star-like appendages suggests precursors with proto-appendages on an ancestor's snout, which became elevated over successive generations. Although this theory lacks fossil evidence or supporting comparative data, nearly all extant moles have sheets of the Eimer’s organ making up the epidermis of their snout around the nares. Also, recent studies of Catania and colleagues identified one North American species (Scapanus townsendii) with a set of proto-appendages extending caudally on the snout which exhibit a striking resemblance to the embryonic stages of the star-nosed mole, although Scapanus townsendii has only eight subdivisions on its face, rather than the 22 appendages found on the star-nosed mole. Such change is of common occurrence in evolution and is explained by the advantage of efficiently adding modules to the body plan without need to reinvent the regulatory elements which produce each module. Thus, although the star is unique in its shape and size, it seems feasible that the structure is based on a more ancestral bauplan as it comprises similarities found in a wide range of other moles and also in the molecular structure of other mammals. The picture which emerges suggests that the star-nosed mole is an extreme in mammalian evolution, having perhaps the most sensitive mechano-sensory system to be found among mammals. There are two evolutionary theories concerning the star-like nose. One proposes the development of the structure of the star as a consequence of the selective pressure of the star-nosed mole's wetland habitat. Wetlands have a dense population of small insects, so exploiting this resource requires a higher resolution sensory surface than that of other moles. 
Thus, a shift to the wetland environment may have provided a selective advantage for a more elaborate sensory structure. Furthermore, in wild caught moles of many species, the Eimer’s organs show obvious signs of wear and abrasion. It appears that constant and repeated contact with the soil damages the sensory organs, which have a thin keratinized epidermis. Star-nosed moles are the only species which live in the moist, muddy soil of wetlands where the less abrasive environment has allowed the delicate star-shaped structure to evolve. The second theory, that of prey profitability, explains the foraging speed of the star-nosed mole. Prey profitability (i.e. energy gained divided by prey handling time) is an essential variable for estimating the optimal diet. When handling time approaches zero, profitability increases dramatically. Due to the small invertebrate prey available in the wetlands, the star-nosed mole has developed handling times as short as 120 ms. The dazzling speed with which it forages therefore counterbalances the low nutritional value of each individual piece of food and maximizes the time available for finding more. Further, the proximity of the star-shaped nose to the mouth greatly reduces the handling time required before food can be ingested and is a major factor in how the star-nosed mole can find and eat food so quickly. Current applications in engineering The study of highly specialized systems often allows better insight into more generalized ones. The mole's striking, star-like structure may reflect a general trend in its "less remarkable" relatives, including humans. Little is known today about the molecular mechanisms of tactile transduction in mammals. As the Drosophila fly is to genetics, or the squid giant axon is to neurobiology, the star-nosed mole may be the model organism for tactile transduction. The proper understanding of its saccade-like system and associated transduction may lead in the future to the development of new types of neural prostheses. Furthermore, the outstanding speed and precision at which the mole performs may provide insights into the structural design of intelligent machines as an artificial response to the remarkable sensory ability of the star-nosed mole. Snout as related to optimal foraging theory According to optimal foraging theory, organisms forage in such a way as to maximize their net energy intake per unit time. In other words, they behave in such a way as to find, capture and consume food containing the most calories while expending the least amount of time possible in doing so. With extremely short handling times for eating very small prey, star-nosed moles can profitably consume foods that are not worth the time or effort of slower animals, and having a food category to themselves is a big advantage. Furthermore, just behind the 11th ray of the star, the star-nosed mole has modified front teeth that form the equivalent of a pair of tweezers. High-speed video shows these specialized teeth are used to pluck tiny prey from the ground. As Catania reports, "It is also clear from the behavior that the teeth and the star act as an integrated unit – the 11th rays, located directly in front on the teeth, spread apart as the teeth move forward to grasp small food. 
Thus, tweezer-like teeth and the exquisitely sensitive star likely evolved together as a means to better find and handle small prey quickly...it appears that the ability to rapidly detect and consume small prey was the major selective advantage that drove the evolution of the star."
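As a rough numerical illustration of the prey-profitability argument above (profitability as energy gained divided by handling time), the sketch below compares a tiny prey item handled in the 120 ms cited in the article with a larger item handled over several seconds. The energy values are hypothetical placeholders, not measurements from the article.

```python
# Minimal sketch of the optimal-foraging argument described above:
# profitability = energy gained / handling time, so very short handling times
# make even tiny prey worthwhile. Energy values below are assumed placeholders.

def profitability(energy_joules: float, handling_time_s: float) -> float:
    return energy_joules / handling_time_s

small_prey = profitability(energy_joules=2.0, handling_time_s=0.120)   # 120 ms handling, as cited
large_prey = profitability(energy_joules=40.0, handling_time_s=5.0)    # hypothetical slower, larger prey
print(f"small prey: {small_prey:.0f} J/s, large prey: {large_prey:.0f} J/s")
```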
Biology and health sciences
Eulipotyphla
Animals
2811075
https://en.wikipedia.org/wiki/Marine%20transgression
Marine transgression
A marine transgression is a geologic event in which sea level rises relative to the land and the shoreline moves toward higher ground, resulting in flooding. Transgressions can be caused by the land sinking or by the ocean basins filling with water or decreasing in capacity. Transgressions and regressions may be caused by tectonic events such as orogenies, by severe climate change such as ice ages, or by isostatic adjustments following removal of ice or sediment load. During the Cretaceous, seafloor spreading created a relatively shallow Atlantic basin at the expense of a deeper Pacific basin. That reduced the world's ocean basin capacity and caused a rise in sea level worldwide. As a result of the sea level rise, the oceans transgressed completely across the central portion of North America and created the Western Interior Seaway from the Gulf of Mexico to the Arctic Ocean. The opposite of transgression is regression, in which the sea level falls relative to the land and exposes the former sea bottom. During the Pleistocene Ice Age, so much water was removed from the oceans and stored on land as year-round glaciers that the ocean regressed 120 m, exposing the Bering land bridge between Alaska and Asia. Characteristic facies Sedimentary facies changes may indicate transgressions and regressions and are often easily identified, because of the unique conditions required to deposit each type of sediment. For instance, coarse-grained clastics like sand are usually deposited in nearshore, high-energy environments. Fine-grained sediments, however, such as silt and carbonate muds, are deposited further offshore, in deeper, lower-energy waters. Thus, a transgression reveals itself in the sedimentary column when there is a change from nearshore facies (such as sandstone) to offshore ones (such as marl), from the oldest to the youngest rocks. A regression will feature the opposite pattern, with offshore facies changing to nearshore ones. Regressions are represented less clearly in the strata, as their upper layers are often marked by an erosional unconformity. These are both idealized scenarios; in practice, identifying transgressions or regressions can be more complicated. For instance, a regression may be indicated by a change from carbonates to shale only, or a transgression by a change from sandstone to shale, and so on. Lateral changes in facies are also important; a well-marked transgression sequence in an area where an epeiric sea was deep may be only partially present further away, where the water was shallow. One should consider such factors when interpreting a specific sedimentary column.
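To illustrate the facies reasoning above, the toy sketch below reads a sedimentary column from oldest (bottom) to youngest (top) and labels the trend as transgressive when facies shift from nearshore toward offshore, and regressive for the opposite shift. The facies-to-relative-water-depth ranking is a simplified assumption for the idealized case, not a universal rule.

```python
# Toy sketch of reading a transgression or regression from a sedimentary column,
# as described above. The depth ranking of facies is a simplified assumption.
DEPTH_RANK = {"sandstone": 1, "shale": 2, "marl": 3}  # nearshore -> offshore

def classify_trend(column_oldest_to_youngest):
    ranks = [DEPTH_RANK[facies] for facies in column_oldest_to_youngest]
    if ranks == sorted(ranks) and ranks[0] < ranks[-1]:
        return "transgression (nearshore facies overlain by offshore facies)"
    if ranks == sorted(ranks, reverse=True) and ranks[0] > ranks[-1]:
        return "regression (offshore facies overlain by nearshore facies)"
    return "mixed or ambiguous; consider lateral facies changes and unconformities"

print(classify_trend(["sandstone", "shale", "marl"]))   # idealized transgressive sequence
print(classify_trend(["marl", "shale", "sandstone"]))   # idealized regressive sequence
```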
Physical sciences
Stratigraphy
Earth science
2812725
https://en.wikipedia.org/wiki/Type%201%20diabetes
Type 1 diabetes
Type 1 diabetes (T1D), formerly known as juvenile diabetes, is an autoimmune disease that occurs when pancreatic cells (beta cells) are destroyed by the body's immune system. In healthy persons, beta cells produce insulin. Insulin is a hormone required by the body to store and convert blood sugar into energy. T1D results in high blood sugar levels in the body prior to treatment. Common symptoms include frequent urination, increased thirst, increased hunger, weight loss, and other complications. Additional symptoms may include blurry vision, tiredness, and slow wound healing (owing to impaired blood flow). While some cases take longer, symptoms usually appear within weeks or a few months. The cause of type 1 diabetes is not completely understood, but it is believed to involve a combination of genetic and environmental factors. The underlying mechanism involves an autoimmune destruction of the insulin-producing beta cells in the pancreas. Diabetes is diagnosed by testing the level of sugar or glycated hemoglobin (HbA1C) in the blood. Type 1 diabetes can typically be distinguished from type 2 by testing for the presence of autoantibodies and/or declining levels/absence of C-peptide. There is no known way to prevent type 1 diabetes. Treatment with insulin is required for survival. Insulin therapy is usually given by injection just under the skin but can also be delivered by an insulin pump. A diabetic diet, exercise, and lifestyle modifications are considered cornerstones of management. If left untreated, diabetes can cause many complications. Complications of relatively rapid onset include diabetic ketoacidosis and nonketotic hyperosmolar coma. Long-term complications include heart disease, stroke, kidney failure, foot ulcers, and damage to the eyes. Furthermore, since insulin lowers blood sugar levels, complications may arise from low blood sugar if more insulin is taken than necessary. Type 1 diabetes makes up an estimated 5–10% of all diabetes cases. The number of people affected globally is unknown, although it is estimated that about 80,000 children develop the disease each year. Within the United States the number of people affected is estimated to be one to three million. Rates of disease vary widely, with approximately one new case per 100,000 per year in East Asia and Latin America and around 30 new cases per 100,000 per year in Scandinavia and Kuwait. It typically begins in children and young adults but can begin at any age. Signs and symptoms Type 1 diabetes can develop at any age, with a peak in onsets during childhood and adolescence. Adult onsets on the other hand are often initially misdiagnosed as type 2. The major sign of type 1 diabetes is very high blood sugar, which typically manifests in children as a few days to weeks of polyuria (increased urination), polydipsia (increased thirst), and weight loss after being exposed to a triggering factor including infections, strenuous exercise, dehydration. Children may also experience increased appetite, blurred vision, bedwetting, recurrent skin infections, candidiasis of the perineum, irritability, and reduced scholastic performance. Adults with type 1 diabetes tend to have more varied symptoms, which come on over months, rather than days or weeks. Prolonged lack of insulin can cause diabetic ketoacidosis, characterized by fruity breath odor, mental confusion, persistent fatigue, dry or flushed skin, abdominal pain, nausea or vomiting, and labored breathing. 
Blood and urine tests reveal unusually high glucose and ketones in the blood and urine. Untreated ketoacidosis can rapidly progress to loss of consciousness, coma, and death. The percentage of children whose type 1 diabetes begins with an episode of diabetic ketoacidosis varies widely by geography, as low as 15% in parts of Europe and North America, and as high as 80% in the developing world. Causes Type 1 diabetes is caused by the destruction of β-cells—the only cells in the body that produce insulin—and the consequent progressive insulin deficiency. Without insulin, the body cannot respond effectively to increases in blood sugar. Due to this, people with diabetes have persistent hyperglycemia. In 70–90% of cases, β-cells are destroyed by one's own immune system, for reasons that are not entirely clear. The best-studied components of this autoimmune response are β-cell-targeted antibodies that begin to develop in the months or years before symptoms arise. Typically, someone will first develop antibodies against insulin or the protein GAD65, followed eventually by antibodies against the proteins IA-2, IA-2β, and/or ZNT8. People with a higher level of these antibodies, especially those who develop them earlier in life, are at higher risk for developing symptomatic type 1 diabetes. The trigger for the development of these antibodies remains unclear. A number of explanatory theories have been put forward, and the cause may involve genetic susceptibility, a diabetogenic trigger, and/or exposure to an antigen. The remaining 10–30% of type 1 diabetics have β-cell destruction but no sign of autoimmunity; this is called idiopathic type 1 diabetes and its cause is unknown. Environmental Various environmental risks have been studied in an attempt to understand what triggers β-cell destroying autoimmunity. Many aspects of environment and life history are associated with slight increases in type 1 diabetes risk, however the connection between each risk and diabetes often remains unclear. Type 1 diabetes risk is slightly higher for children whose mothers are obese or older than 35, or for children born by caesarean section. Similarly, a child's weight gain in the first year of life, total weight, and BMI are associated with slightly increased type 1 diabetes risk. Some dietary habits have also been associated with type 1 diabetes risk, namely consumption of cow's milk and dietary sugar intake. Animal studies and some large human studies have found small associations between type 1 diabetes risk and intake of gluten or dietary fiber; however, other large human studies have found no such association. Many potential environmental triggers have been investigated in large human studies and found to be unassociated with type 1 diabetes risk including duration of breastfeeding, time of introduction of cow milk into the diet, vitamin D consumption, blood levels of active vitamin D, and maternal intake of omega-3 fatty acids. A longstanding hypothesis for an environmental trigger is that some viral infection early in life contributes to type 1 diabetes development. Much of this work has focused on enteroviruses, with some studies finding slight associations with type 1 diabetes, and others finding none. Large human studies have searched for, but not yet found an association between type 1 diabetes and various other viral infections, including infections of the mother during pregnancy. 
Conversely, some have postulated that reduced exposure to pathogens in the developed world increases the risk of autoimmune diseases, often called the hygiene hypothesis. Various studies of hygiene-related factors—including household crowding, daycare attendance, population density, childhood vaccinations, antihelminth medication, and antibiotic usage during early life or pregnancy—show no association with type 1 diabetes. Genetics Type 1 diabetes is partially caused by genetics, and family members of type 1 diabetics have a higher risk of developing the disease themselves. In the general population, the risk of developing type 1 diabetes is around 1 in 250. For someone whose parent has type 1 diabetes, the risk rises to 1–9%. If a sibling has type 1 diabetes, the risk is 6–7%. If someone's identical twin has type 1 diabetes, they have a 30–70% risk of developing it themselves. About half of the disease's heritability is due to variations in three HLA class II genes involved in antigen presentation: HLA-DRB1, HLA-DQA1, and HLA-DQB1. The variation patterns associated with increased risk of type 1 diabetes are called HLA-DR3 and HLA-DR4-HLA-DQ8, and are common in people of European descent. A pattern associated with reduced risk of type 1 diabetes is called HLA-DR15-HLA-DQ6. Large genome-wide association studies have identified dozens of other genes associated with type 1 diabetes risk, mostly genes involved in the immune system. Chemicals and drugs Some medicines can reduce insulin production or damage β cells, resulting in disease that resembles type 1 diabetes. The antiviral drug didanosine triggers pancreas inflammation in 5 to 10% of those who take it, sometimes causing lasting β-cell damage. Similarly, up to 5% of those who take the anti-protozoal drug pentamidine experience β-cell destruction and diabetes. Several other drugs cause diabetes by reversibly reducing insulin secretion, namely statins (which may also damage β cells), the post-transplant immunosuppressants cyclosporin A and tacrolimus, the leukemia drug L-asparaginase, and the antibiotic gatifloxicin. Post-operative changes One cause of Type 1 diabetes is through surgery. This is due to the destruction or intentional removal of a portion of or the entire pancreas. This greatly decreases the number of beta-islet cells capable of producing insulin, resulting in an acquired form of Type 1 diabetes known as pancreatogenic diabetes mellitus. This type of diabetes is most often seen in patients that undergo a pancreatoduodenectomy (Whipple Procedure) or a total pancreatectomy. Patients that undergo a total pancreatectomy are medically recognized as a "brittle diabetic". This nomenclature informs medical professionals the patient has no insulin production and requires extensive monitoring to avoid severe hyperglycemia or hypoglycemia. Hypoglycemia is significantly more worrying in these patients due to the potential for coma and even death. Many of these patients require an insulin pump that constantly measures their blood glucose levels. Diagnosis Diabetes is typically diagnosed by a blood test showing unusually high blood sugar. The World Health Organization defines diabetes as blood sugar levels at or above 7.0 mmol/L (126 mg/dL) after fasting for at least eight hours, or a glucose level at or above 11.1 mmol/L (200 mg/dL) two hours after an oral glucose tolerance test. 
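These cut-offs translate directly into a simple worked check. The sketch below (Python) encodes only the two WHO thresholds just quoted, together with the standard approximation that 1 mmol/L of glucose corresponds to about 18 mg/dL; the function names are invented for illustration, and a real diagnosis also rests on symptoms, repeat testing, and clinical judgment.

```python
# Illustrative sketch of the WHO glycemic thresholds quoted above. The
# function and variable names are invented for this example; an actual
# diagnosis also rests on symptoms, repeat testing, and clinical judgment.
from typing import Optional

MGDL_PER_MMOLL = 18.0  # ~18 mg/dL per 1 mmol/L of glucose (standard approximation)

def mmoll_to_mgdl(mmol_l: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_l * MGDL_PER_MMOLL

def meets_who_diabetes_threshold(fasting_mmol_l: Optional[float] = None,
                                 ogtt_2h_mmol_l: Optional[float] = None) -> bool:
    """True if either WHO cut-off quoted above is met: fasting glucose
    >= 7.0 mmol/L (126 mg/dL), or 2-hour OGTT glucose >= 11.1 mmol/L (200 mg/dL)."""
    fasting_high = fasting_mmol_l is not None and fasting_mmol_l >= 7.0
    ogtt_high = ogtt_2h_mmol_l is not None and ogtt_2h_mmol_l >= 11.1
    return fasting_high or ogtt_high

print(mmoll_to_mgdl(7.0))                                  # 126.0
print(meets_who_diabetes_threshold(fasting_mmol_l=7.4))    # True
print(meets_who_diabetes_threshold(ogtt_2h_mmol_l=9.8))    # False
```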
The American Diabetes Association additionally recommends a diagnosis of diabetes for anyone with symptoms of hyperglycemia and blood sugar at any time at or above 11.1 mmol/L, or glycated hemoglobin (hemoglobin A1C) levels at or above 48 mmol/mol (6.5%). Once a diagnosis of diabetes is established, type 1 diabetes is distinguished from other types by a blood test for the presence of autoantibodies that target various components of the beta cell. The most commonly available tests detect antibodies against glutamic acid decarboxylase, the beta cell cytoplasm, or insulin, each of which are targeted by antibodies in around 80% of type 1 diabetics. Some healthcare providers also have access to tests for antibodies targeting the beta cell proteins IA-2 and ZnT8; these antibodies are present in around 58% and 80% of type 1 diabetics respectively. Some also test for C-peptide, a byproduct of insulin synthesis. Very low C-peptide levels are suggestive of type 1 diabetes. Management The mainstay of type 1 diabetes treatment is the regular injection of insulin to manage hyperglycemia. Injections of insulin via subcutaneous injection using either a syringe or an insulin pump are necessary multiple times per day, adjusting dosages to account for food intake, blood glucose levels, and physical activity. The goal of treatment is to maintain blood sugar in a normal range—80–130 mg/dL (4.4–7.2 mmol/L) before a meal; <180 mg/dL (10.0 mmol/L) after—as often as possible. To achieve this, people with diabetes often monitor their blood glucose levels at home. Around 83% of type 1 diabetics monitor their blood glucose by capillary blood testing: pricking the finger to draw a drop of blood, and determining blood glucose with a glucose meter. The American Diabetes Association recommends testing blood glucose around 6–10 times per day: before each meal, before exercise, at bedtime, occasionally after a meal, and any time someone feels the symptoms of hypoglycemia. Around 17% of people with type 1 diabetes use a continuous glucose monitor, a device with a sensor under the skin that constantly measures glucose levels and communicates those levels to an external device. Continuous glucose monitoring is associated with better blood sugar control than capillary blood testing alone; however, continuous glucose monitoring tends to be substantially more expensive. Healthcare providers can also monitor someone's hemoglobin A1C levels which reflect the average blood sugar over the last three months. The American Diabetes Association recommends a goal of keeping hemoglobin A1C levels under 7% for most adults and 7.5% for children. The goal of insulin therapy is to mimic normal pancreatic insulin secretion: low levels of insulin constantly present to support basic metabolism, plus the two-phase secretion of additional insulin in response to high blood sugar, then an extended phase of continued insulin secretion. This is accomplished by combining different insulin preparations that act with differing speeds and durations. The standard of care for type 1 diabetes is a bolus of rapid-acting insulin 10–15 minutes before each meal or snack, and as-needed to correct hyperglycemia. In addition, constant low levels of insulin are achieved with one or two daily doses of long-acting insulin, or by steady infusion by an insulin pump. 
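In practice, the mealtime bolus is derived from the carbohydrate content of the meal plus a correction toward a target glucose, as the next paragraph describes. A minimal sketch of that arithmetic follows; the insulin-to-carbohydrate ratio, correction factor, and glucose target shown are entirely hypothetical, since real values are individual and set with a clinician.

```python
# Hypothetical illustration of carbohydrate-counting bolus arithmetic.
# All parameters below are invented for the example; real insulin-to-carb
# ratios and correction factors vary widely between individuals and are
# prescribed and adjusted by a clinician.

def mealtime_bolus_units(carbs_g: float,
                         current_glucose_mgdl: float,
                         carb_ratio_g_per_unit: float = 10.0,    # hypothetical: 1 unit per 10 g carbohydrate
                         correction_mgdl_per_unit: float = 50.0, # hypothetical: 1 unit lowers glucose ~50 mg/dL
                         target_glucose_mgdl: float = 120.0) -> float:
    """Carbohydrate dose plus a correction dose when glucose is above target."""
    carb_dose = carbs_g / carb_ratio_g_per_unit
    correction_dose = max(0.0, (current_glucose_mgdl - target_glucose_mgdl) / correction_mgdl_per_unit)
    return round(carb_dose + correction_dose, 1)

# Example: a 60 g carbohydrate meal with a pre-meal reading of 180 mg/dL
# -> 60/10 + (180-120)/50 = 6.0 + 1.2 = 7.2 units under these made-up parameters.
print(mealtime_bolus_units(60, 180))
```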
The exact dose of insulin appropriate for each injection depends on the content of the meal or snack and on the individual's sensitivity to insulin, and is therefore typically calculated by the person with diabetes or a family member by hand or with an assistive device (calculator, chart, mobile app, etc.). People unable to manage these intensive insulin regimens are sometimes prescribed alternate plans relying on mixtures of rapid- or short-acting and intermediate-acting insulin, which are administered at fixed times together with meals of pre-planned timing and carbohydrate composition. The National Institute for Health and Care Excellence now recommends closed-loop insulin systems as an option for all women with type 1 diabetes who are pregnant or planning pregnancy. A non-insulin medication approved by the U.S. Food and Drug Administration for treating type 1 diabetes is the amylin analog pramlintide, which replaces the beta-cell hormone amylin. Adding pramlintide to mealtime insulin injections reduces the rise in blood sugar after a meal, improving blood sugar control. Occasionally, metformin, GLP-1 receptor agonists, dipeptidyl peptidase-4 inhibitors, or SGLT2 inhibitors are prescribed off-label to people with type 1 diabetes, although fewer than 5% of type 1 diabetics use these drugs. Lifestyle Besides insulin, the major way type 1 diabetics control their blood sugar is by learning how various foods affect their blood sugar levels. This is primarily done by tracking carbohydrate intake, since carbohydrates are the type of food with the greatest impact on blood sugar. In general, people with type 1 diabetes are advised to follow an individualized eating plan rather than a pre-decided one. There are camps for children that teach them how and when to use or monitor their insulin without parental help. As psychological stress may have a negative effect on diabetes, a number of measures have been recommended, including exercising, taking up a new hobby, and joining a charity, among others. Regular exercise is important for maintaining general health, though the effect of exercise on blood sugar can be challenging to predict. Exogenous insulin can drive down blood sugar, leaving those with diabetes at risk of hypoglycemia during and immediately after exercise, and then again seven to eleven hours after exercise (the "lag effect"). Conversely, high-intensity exercise can result in a shortage of insulin and consequent hyperglycemia. The risk of hypoglycemia can be managed by beginning exercise when blood sugar is relatively high (above 100 mg/dL or 5.5 mmol/L), ingesting carbohydrates during or shortly after exercise, and reducing the amount of injected insulin within two hours of the planned exercise. Similarly, the risk of exercise-induced hyperglycemia can be managed by avoiding exercise when insulin levels are very low, when blood sugar is extremely high (above 350 mg/dL or 19.4 mmol/L), or when one feels unwell. Research on exercise in youth with diabetes covers both type 1 diabetes (T1DM), an autoimmune disease in which the pancreas can no longer produce the insulin the body needs to regulate blood sugar, and type 2 diabetes (T2DM), a chronic disease in which the body produces insulin but does not use it properly or does not produce enough, likewise resulting in high blood sugar (hyperglycemia).
There is no definitive answer as to which type of exercise is best for either of these metabolic diseases, but physical activity guidelines state that children should get at least 60 minutes of moderate- to vigorous-intensity activity each day, the same recommendation as for children without T1DM or T2DM. Addressing these challenges is vital for improving care and health outcomes in pediatric diabetes, and before engaging in physical activity it is important that the diagnosis is known and the condition is being managed properly. Comparisons of research in this area illustrate the point. Two studies focus explicitly on the role of exercise in managing diabetes (one exploring the benefits of high-intensity interval training, or HIIT, for psychological and physical health in T1DM, the other the effectiveness of exercise in T2DM), while a third examines the implications of diabetes misdiagnosis, which relates to exercise indirectly by underscoring the need for proper management before physical activity. Both exercise studies present exercise as a beneficial management tool but report different outcomes. In T2DM, exercise is shown to be a powerful tool for improving glycemic control and reducing cardiovascular risk. In T1DM, exercise can improve lipid profiles and other aspects of health but does not necessarily lead to better blood sugar control, and there are additional barriers such as fear of hypoglycemia; the HIIT study nonetheless finds that HIIT can improve psychological well-being and exercise adherence in T1DM, showing that exercise has benefits beyond metabolic control. All three studies also give insight into barriers to exercise: fear of hypoglycemia and low motivation in T1DM, the unpredictability of blood sugar fluctuations around exercise, and the possibility that exercise could be counterproductive or harmful if a child's diabetes is misdiagnosed. The studies also differ in their emphasis on psychological and motivational factors: the HIIT study stresses exercise enjoyment and intrinsic motivation as keys to adherence in T1DM, whereas the T2DM study concentrates on physical and metabolic effects, noting only briefly that many individuals with T1DM remain motivated to exercise by the health benefits or by inspiration from others. Clinically, these findings suggest that exercise programs must be tailored not only to the type of diabetes but also to the individual's health status and management plan, and that without accurate diagnosis and proper management, exercise recommendations may be inappropriate or unsafe.
Together, these studies highlight the complex interactions between exercise, diabetes type, treatment, and individual challenges. Transplant In some cases, people can receive transplants of the pancreas or isolated islet cells to restore insulin production and alleviate diabetic symptoms. Transplantation of the whole pancreas is rare, due in part to the few available donor organs, and to the need for lifelong immunosuppressive therapy to prevent transplant rejection. The American Diabetes Association recommends pancreas transplant only in people who also require a kidney transplant, or who struggle to perform regular insulin therapy and experience repeated severe side effects of poor blood sugar control. Most pancreas transplants are done simultaneously with a kidney transplant, with both organs from the same donor. The transplanted pancreas continues to function for at least five years in around three quarters of recipients, allowing them to stop taking insulin. Transplantations of islets alone have become increasingly common. Pancreatic islets are isolated from a donor pancreas, then injected into the recipient's portal vein from which they implant onto the recipient's liver. In nearly half of recipients, the islet transplant continues to work well enough that they still do not need exogenous insulin five years after transplantation. If a transplant fails, recipients can receive subsequent injections of islets from additional donors into the portal vein. Like with whole pancreas transplantation, islet transplantation requires lifelong immunosuppression and depends on the limited supply of donor organs; it is therefore similarly limited to people with severe poorly controlled diabetes and those who have had or are scheduled for a kidney transplant. Donislecel (Lantidra) allogeneic (donor) pancreatic islet cellular therapy was approved for medical use in the United States in June 2023. Pathogenesis Type 1 diabetes is a result of the destruction of pancreatic beta cells, although what triggers that destruction remains unclear. People with type 1 diabetes tend to have more CD8+ T-cells and B-cells that specifically target islet antigens than those without type 1 diabetes, suggesting a role for the adaptive immune system in beta cell destruction. Type 1 diabetics also tend to have reduced regulatory T cell function, which may exacerbate autoimmunity. Destruction of beta cells results in inflammation of the islet of Langerhans, called insulitis. These inflamed islets tend to contain CD8+ T-cells and – to a lesser extent – CD4+ T cells. Abnormalities in the pancreas or the beta cells themselves may also contribute to beta-cell destruction. The pancreases of people with type 1 diabetes tend to be smaller, lighter, and have abnormal blood vessels, nerve innervations, and extracellular matrix organization. In addition, beta cells from people with type 1 diabetes sometimes overexpress HLA class I molecules (responsible for signaling to the immune system) and have increased endoplasmic reticulum stress and issues with synthesizing and folding new proteins, any of which could contribute to their demise. The mechanism by which the beta cells actually die likely involves both necroptosis and apoptosis, induced or exacerbated by CD8+ T-cells and macrophages. Necroptosis can be triggered by activated T cells – which secrete toxic granzymes and perforin – or indirectly as a result of reduced blood flow or the generation of reactive oxygen species. 
As some beta cells die, they may release cellular components that amplify the immune response, exacerbating inflammation and cell death. Pancreases from people with type 1 diabetes also have signs of beta cell apoptosis, linked to activation of the janus kinase and TYK2 pathways. Partial ablation of beta-cell function is enough to cause diabetes; at diagnosis, people with type 1 diabetes often still have detectable beta-cell function. Once insulin therapy is started, many people experience a resurgence in beta-cell function, and can go some time with little-to-no insulin treatment – called the "honeymoon phase". This eventually fades as beta-cells continue to be destroyed, and insulin treatment is required again. Beta-cell destruction is not always complete, as 30–80% of type 1 diabetics produce small amounts of insulin years or decades after diagnosis. Alpha cell dysfunction Onset of autoimmune diabetes is accompanied by impaired ability to regulate the hormone glucagon, which acts in antagonism with insulin to regulate blood sugar and metabolism. Progressive beta cell destruction leads to dysfunction in the neighboring alpha cells which secrete glucagon, exacerbating excursions away from euglycemia in both directions; overproduction of glucagon after meals causes sharper hyperglycemia, and failure to stimulate glucagon upon hypoglycemia prevents a glucagon-mediated rescue of glucose levels. Hyperglucagonemia Onset of type 1 diabetes is followed by an increase in glucagon secretion after meals. Increases have been measured up to 37% during the first year of diagnosis, while C-peptide levels (indicative of islet-derived insulin), decline by up to 45%. Insulin production will continue to fall as the immune system destroys beta cells, and islet-derived insulin will continue to be replaced by therapeutic exogenous insulin. Simultaneously, there is measurable alpha cell hypertrophy and hyperplasia in the early stage of the disease, leading to expanded alpha cell mass. This, together with failing beta cell insulin secretion, begins to account for rising glucagon levels that contribute to hyperglycemia. Some researchers believe glucagon dysregulation to be the primary cause of early stage hyperglycemia. Leading hypotheses for the cause of postprandial hyperglucagonemia suggest that exogenous insulin therapy is inadequate to replace the lost intraislet signalling to alpha cells previously mediated by beta cell-derived pulsatile insulin secretion. Under this working hypothesis intensive insulin therapy has attempted to mimic natural insulin secretion profiles in exogenous insulin infusion therapies. In young people with type 1 diabetes, unexplained deaths could be due to nighttime hypoglycemia triggering abnormal heart rhythms or cardiac autonomic neuropathy, damage to nerves that control the function of the heart. Hypoglycemic glucagon impairment Glucagon secretion is normally increased upon falling glucose levels, but normal glucagon response to hypoglycemia is blunted in type 1 diabetics. Beta cell glucose sensing and subsequent suppression of administered insulin secretion is absent, leading to islet hyperinsulinemia which inhibits glucagon release. Autonomic inputs to alpha cells are much more important for glucagon stimulation in the moderate to severe ranges of hypoglycemia, yet the autonomic response is blunted in a number of ways. 
Recurrent hypoglycemia leads to metabolic adjustments in the glucose-sensing areas of the brain, shifting the threshold for counterregulatory activation of the sympathetic nervous system to a lower glucose concentration. This is known as hypoglycemic unawareness. Subsequent hypoglycemia is then met with impaired counterregulatory signaling to the islets and adrenal glands. This accounts for the lack of glucagon stimulation and epinephrine release that would normally stimulate and enhance glucose release and production from the liver, rescuing the person with diabetes from severe hypoglycemia, coma, and death. Numerous hypotheses have been proposed for the cellular mechanism of hypoglycemic unawareness, but a consensus has yet to be reached. In addition, autoimmune diabetes is characterized by a loss of islet-specific sympathetic innervation. This loss constitutes an 80–90% reduction of islet sympathetic nerve endings, happens early in the progression of the disease, and persists through the life of the patient. It is linked to the autoimmune nature of type 1 diabetes and does not occur in type 2 diabetes. Early in the autoimmune process, axon pruning is activated in islet sympathetic nerves. Increased BDNF and ROS resulting from insulitis and beta cell death stimulate the p75 neurotrophin receptor (p75NTR), which acts to prune off axons. Axons are normally protected from pruning by activation of tropomyosin receptor kinase A (Trk A) receptors by NGF, which in islets is primarily produced by beta cells. Progressive autoimmune beta cell destruction therefore causes both the activation of pruning factors and the loss of protective factors to the islet sympathetic nerves. This unique form of neuropathy is a hallmark of type 1 diabetes and plays a part in the loss of glucagon rescue from severe hypoglycemia. Complications The most pressing complications of type 1 diabetes are the ever-present risks of poor blood sugar control: severe hypoglycemia and diabetic ketoacidosis. Hypoglycemia – typically blood sugar below 70 mg/dL (3.9 mmol/L) – triggers the release of epinephrine and can cause people to feel shaky, anxious, or irritable. People with hypoglycemia may also experience hunger, nausea, sweats, chills, headaches, dizziness, and a fast heartbeat. Some feel lightheaded, sleepy, or weak. Severe hypoglycemia can develop rapidly, causing confusion, coordination problems, loss of consciousness, and seizure. On average, people with type 1 diabetes experience a hypoglycemic event requiring the assistance of another person 16–20 times per 100 person-years, and an event leading to unconsciousness or seizure 2–8 times per 100 person-years. The American Diabetes Association recommends treating hypoglycemia by the "15–15 rule": eat 15 grams of carbohydrates, then wait 15 minutes before checking blood sugar; repeat until blood sugar is at least 70 mg/dL (3.9 mmol/L). Severe hypoglycemia that impairs someone's ability to eat is typically treated with injectable glucagon, which triggers glucose release from the liver into the bloodstream. People with repeated bouts of hypoglycemia can develop hypoglycemia unawareness, where the blood sugar threshold at which they experience symptoms of hypoglycemia decreases, increasing their risk of severe hypoglycemic events.
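The "15–15 rule" quoted above is essentially a short measure-treat-wait loop, and can be sketched as such. In the sketch below, read_glucose_mgdl and eat_carbs_g are hypothetical stand-ins for a finger-stick measurement and for eating carbohydrate; severe hypoglycemia that prevents eating is treated with injectable glucagon rather than by this loop.

```python
import time

def fifteen_fifteen_rule(read_glucose_mgdl, eat_carbs_g, max_rounds=5):
    """Sketch of the ADA '15-15 rule' quoted above: take 15 g of fast-acting
    carbohydrate, wait 15 minutes, re-check, and repeat until blood sugar is
    at least 70 mg/dL (3.9 mmol/L). The two arguments are caller-supplied
    stand-ins (hypothetical) for a glucose-meter reading and for eating."""
    for _ in range(max_rounds):
        if read_glucose_mgdl() >= 70:
            return "glucose back above 70 mg/dL"
        eat_carbs_g(15)       # 15 grams of fast-acting carbohydrate
        time.sleep(15 * 60)   # wait 15 minutes before re-checking
    return "still low after repeated treatment; seek medical help"
```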
Rates of severe hypoglycemia have generally declined due to the advent of rapid-acting and long-acting insulin products in the 1990s and early 2000s; however, acute hypoglycemia still causes 4–10% of type 1 diabetes-related deaths. The other persistent risk is diabetic ketoacidosis – a state in which lack of insulin results in cells burning fat rather than sugar, producing toxic ketones as a byproduct. Ketoacidosis symptoms can develop rapidly, with frequent urination, excessive thirst, nausea, vomiting, and severe abdominal pain all common. More severe ketoacidosis can result in labored breathing and loss of consciousness due to cerebral edema. People with type 1 diabetes experience diabetic ketoacidosis 1–5 times per 100 person-years, the majority of which result in hospitalization. 13–19% of type 1 diabetes-related deaths are caused by ketoacidosis, making ketoacidosis the leading cause of death in people with type 1 diabetes under 58 years of age. Long-term complications In addition to the acute complications of diabetes, long-term hyperglycemia results in damage to the small blood vessels throughout the body. This damage tends to manifest particularly in the eyes, nerves, and kidneys, causing diabetic retinopathy, diabetic neuropathy, and diabetic nephropathy respectively. In the eyes, prolonged high blood sugar causes the blood vessels in the retina to become fragile. People with type 1 diabetes also have an increased risk of cardiovascular disease, which is estimated to shorten the life of the average type 1 diabetic by 8–13 years. Cardiovascular disease, as well as neuropathy, may also have an autoimmune basis. Women with type 1 DM have a 40% higher risk of death compared to men with type 1 DM. About 12 percent of people with type 1 diabetes have clinical depression. About 6 percent of people with type 1 diabetes also have celiac disease, but in most cases there are no digestive symptoms, or the symptoms are mistakenly attributed to poor control of diabetes, gastroparesis, or diabetic neuropathy. In most cases, celiac disease is diagnosed after the onset of type 1 diabetes. The association of celiac disease with type 1 diabetes increases the risk of complications, such as retinopathy, and of mortality. This association can be explained by shared genetic factors, and by inflammation or nutritional deficiencies caused by untreated celiac disease, even if type 1 diabetes is diagnosed first. Urinary tract infection People with diabetes show an increased rate of urinary tract infection. The reason is that bladder dysfunction is more common in people with diabetes than in people without diabetes, owing to diabetic neuropathy. When present, neuropathy can decrease bladder sensation, which in turn can cause increased residual urine, a risk factor for urinary tract infections. Sexual dysfunction Sexual dysfunction in people with diabetes is often a result of physical factors such as nerve damage and poor circulation, and of psychological factors such as stress and/or depression caused by the demands of the disease. The most common sexual issues in males with diabetes are problems with erections and ejaculation: "With diabetes, blood vessels supplying the penis's erectile tissue can get hard and narrow, preventing the adequate blood supply needed for a firm erection. The nerve damage caused by poor blood glucose control can also cause ejaculate to go into the bladder instead of through the penis during ejaculation, called retrograde ejaculation. When this happens, semen leaves the body in the urine."
Another cause of erectile dysfunction is reactive oxygen species created as a result of the disease. Antioxidants can be used to help combat this. Sexual problems are common in women who have diabetes, including reduced sensation in the genitals, dryness, difficulty/inability to orgasm, pain during sex, and decreased libido. Diabetes sometimes decreases estrogen levels in females, which can affect vaginal lubrication. Less is known about the correlation between diabetes and sexual dysfunction in females than in males. Oral contraceptive pills can cause blood sugar imbalances in women who have diabetes. Dosage changes can help address that, at the risk of side effects and complications. Women with type 1 diabetes show a higher than normal rate of polycystic ovarian syndrome (PCOS). The reason may be that the ovaries are exposed to high insulin concentrations since women with type 1 diabetes can have frequent hyperglycemia. Autoimmune disorders People with type 1 diabetes are at an increased risk for developing several autoimmune disorders, particularly thyroid problems – around 20% of people with type 1 diabetes have hypothyroidism or hyperthyroidism, typically caused by Hashimoto thyroiditis or Graves' disease respectively. Celiac disease affects 2–8% of people with type 1 diabetes, and is more common in those who were younger at diabetes diagnosis, and in white people. Type 1 diabetics are also at increased risk of rheumatoid arthritis, lupus, autoimmune gastritis, pernicious anemia, vitiligo, and Addison's disease. Conversely, complex autoimmune syndromes caused by mutations in the immunity-related genes AIRE (causing autoimmune polyglandular syndrome), FoxP3 (causing IPEX syndrome), or STAT3 include type 1 diabetes in their effects. Prevention There is no way to prevent type 1 diabetes; however, the development of diabetes symptoms can be delayed in some people who are at high risk of developing the disease. In 2022 the FDA approved an intravenous injection of teplizumab to delay the progression of type 1 diabetes in those older than eight who have already developed diabetes-related autoantibodies and problems with blood sugar control. In that population, the anti-CD3 monoclonal antibody teplizumab can delay the development of type 1 diabetes symptoms by around two years. In addition to anti-CD3 antibodies, several other immunosuppressive agents have been trialled with the aim of preventing beta cell destruction. Large trials of cyclosporine treatment suggested that cyclosporine could improve insulin secretion in those recently diagnosed with type 1 diabetes; however, people who stopped taking cyclosporine rapidly stopped making insulin, and cyclosporine's kidney toxicity and increased risk of cancer prevented people from using it long-term. Several other immunosuppressive agents – prednisone, azathioprine, anti-thymocyte globulin, mycophenolate, and antibodies against CD20 and IL2 receptor α – have been the subject of research, but none have provided lasting protection from development of type 1 diabetes. There have also been clinical trials attempting to induce immune tolerance by vaccination with insulin, GAD65, and various short peptides targeted by immune cells during type 1 diabetes; none have yet delayed or prevented development of disease. Several trials have attempted dietary interventions with the hope of reducing the autoimmunity that leads to type 1 diabetes. 
Trials that withheld cow's milk or gave infants formula free of bovine insulin decreased the development of β-cell-targeted antibodies, but did not prevent the development of type 1 diabetes. Similarly, trials that gave high-risk individuals injected insulin, oral insulin, or nicotinamide did not prevent diabetes development. Other strategies under investigation for the prevention of type 1 diabetes include gene therapy, stem cell therapy, and modulation of the gut microbiome. Gene therapy approaches, while still in early stages, aim to alter genetic factors that contribute to beta-cell destruction by editing immune responses. Stem cell therapies are also being researched, with the hope that they can either regenerate insulin-producing beta cells or protect them from immune attack. Trials using stem cells to restore beta cell function or regulate immune responses are ongoing. Modifying the gut microbiota through the use of probiotics, prebiotics, or specific diets has also gained attention. Some evidence suggests that the gut microbiome plays a role in immune regulation, and researchers are investigating whether altering the microbiome could reduce the risk of autoimmunity and, subsequently, type 1 diabetes. Tolerogenic therapies, which seek to induce immune tolerance to beta-cell antigens, are another area of interest. Techniques such as using dendritic cells or regulatory T cells engineered to promote tolerance to beta cells are being studied in clinical trials, though these approaches remain experimental. There is also a hypothesis that certain viral infections, particularly enteroviruses, may trigger type 1 diabetes in genetically predisposed individuals. Researchers are investigating whether vaccines targeting these viruses could reduce the risk of developing the disease. Combination immunotherapies are being explored as well, with the aim of achieving more durable immune protection by using multiple agents together. For example, anti-CD3 antibodies may be combined with other immunomodulatory agents such as IL-1 blockers or checkpoint inhibitors. Finally, researchers are studying how environmental factors such as infections, diet, and stress may affect immune regulation through epigenetic modifications. The hope is that targeting these epigenetic changes could delay or prevent the onset of type 1 diabetes in high-risk individuals. Epidemiology Type 1 diabetes makes up an estimated 10–15% of all diabetes cases or 11–22 million cases worldwide. Symptoms can begin at any age, but onset is most common in children, with diagnoses slightly more common in 5 to 7 year olds, and much more common around the age of puberty. In contrast to most autoimmune diseases, type 1 diabetes is slightly more common in males than in females. In 2006, type 1 diabetes affected 440,000 children under 14 years of age and was the primary cause of diabetes in those less than 15 years of age. Rates vary widely by country and region. Incidence is highest in Scandinavia, at 30–60 new cases per 100,000 children per year, intermediate in the U.S. and Southern Europe at 10–20 cases per 100,000 per year, and lowest in China, much of Asia, and South America at 1–3 cases per 100,000 per year. In the United States, type 1 and 2 diabetes affected about 208,000 youths under the age of 20 in 2015. Over 18,000 youths are diagnosed with Type 1 diabetes every year. Every year about 234,051 Americans die due to diabetes (type I or II) or diabetes-related complications, with 69,071 having it as the primary cause of death. 
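Because these figures are rates per 100,000 children per year, converting them into expected case counts for a given population is simple proportion arithmetic. The sketch below does this for an arbitrary, hypothetical child population of one million, using the regional ranges quoted above.

```python
# Converting the incidence rates quoted above (new cases per 100,000 children
# per year) into expected annual case counts. The one-million child population
# is an arbitrary, hypothetical figure chosen only to make the arithmetic concrete.

def expected_annual_cases(rate_per_100k: float, population: int) -> float:
    return rate_per_100k * population / 100_000

child_population = 1_000_000  # hypothetical
regions = {
    "Scandinavia": (30, 60),
    "U.S. / Southern Europe": (10, 20),
    "China, much of Asia, South America": (1, 3),
}
for region, (low, high) in regions.items():
    print(f"{region}: {expected_annual_cases(low, child_population):.0f}"
          f"-{expected_annual_cases(high, child_population):.0f} new cases per year")
```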
In Australia, about one million people have been diagnosed with diabetes, and of this figure 130,000 people have been diagnosed with type 1 diabetes. Australia ranks sixth-highest in the world in incidence among children under 14 years of age. Between 2000 and 2013, 31,895 new cases were recorded, with 2,323 in 2013, a rate of 10–13 cases per 100,000 people each year. Aboriginal and Torres Strait Islander people are less affected. Since the 1950s, the incidence of type 1 diabetes has been gradually increasing across the world by an average of 3–4% per year. The increase is more pronounced in countries that began with a lower incidence of type 1 diabetes. A single 2023 study suggested a relationship between COVID-19 infection and the incidence of type 1 diabetes in children; confirmatory studies have not appeared to date. Type 1 diabetes in youth Type 1 diabetes, also known as juvenile-onset diabetes, is increasing in children and adolescents under the age of 15. It is an autoimmune disease in which the body attacks the insulin-producing beta cells of the pancreas, causing insulin deficiency. Type 1 diabetes is mainly diagnosed in children, and the number of diagnoses is increasing around the world. Management with exercise Children with type 1 diabetes typically manage their blood sugar levels with regular insulin injections; however, exercise can also play a vital role in the management of type 1 diabetes. For youth with type 1 diabetes, exercise is associated with better blood sugar control, and HbA1c levels are reduced significantly when children with type 1 diabetes participate in structured exercise interventions. In one study, Garcia-Hermoso and colleagues found that high-intensity exercise, concurrent training, exercise interventions lasting 24 weeks or more, and exercise sessions lasting 60 minutes or more produced greater HbA1c reductions in children with type 1 diabetes. They also observed that exercise sessions lasting 60 minutes or more, high-intensity exercise, and concurrent training interventions led to a decrease in daily insulin dosage. Additionally, Petschnig and colleagues examined the effect of strength training on blood sugar levels and found that children with type 1 diabetes who performed strength training for 17 weeks showed no change in HbA1c levels, but after 32 weeks of training experienced a significant decrease in HbA1c levels. Petschnig and colleagues also observed that blood sugar levels decreased significantly following strength training sessions. Finally, the Diabetes Research in Children Network Study Group found that children who participated in prolonged aerobic exercise after school experienced a decrease in plasma glucose levels of 40% below their baseline values. The group observed that blood sugar levels decreased rapidly in the first 15 minutes of exercise and continued to drop during the 75-minute session, and that after prolonged aerobic exercise, 83% of participants had at least a 25% decrease in blood sugar levels. High-intensity and concurrent training interventions, strength training, and prolonged aerobic exercise have all been shown to help reduce HbA1c and blood glucose levels in children with type 1 diabetes, demonstrating that exercise plays a vital role in the management of type 1 diabetes.
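On the practical side, exercising safely with type 1 diabetes comes down to the glucose thresholds noted earlier under Lifestyle: start when blood sugar is comfortably above roughly 100 mg/dL, avoid exercise above roughly 350 mg/dL or when feeling unwell, and watch for delayed lows up to seven to eleven hours afterwards. A minimal decision sketch of those rules follows; it is illustrative only and not a substitute for an individualized management plan.

```python
def pre_exercise_check(glucose_mgdl: float, feels_unwell: bool = False) -> str:
    """Sketch of the pre-exercise thresholds discussed under Lifestyle above.
    Below ~100 mg/dL there is a risk of exercise-induced hypoglycemia; above
    ~350 mg/dL (or when feeling unwell) insulin may be insufficient and
    hyperglycemia can worsen. Purely illustrative, not an individual plan."""
    if feels_unwell or glucose_mgdl > 350:
        return "postpone exercise; address high glucose / insulin first"
    if glucose_mgdl < 100:
        return "eat carbohydrate and re-check before starting"
    return "reasonable range to start; monitor during and for 7-11 hours afterwards"

print(pre_exercise_check(85))    # eat carbohydrate and re-check before starting
print(pre_exercise_check(160))   # reasonable range to start; monitor ...
print(pre_exercise_check(400))   # postpone exercise ...
```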
History The connection between diabetes and pancreatic damage was first described by the German pathologist Martin Schmidt, who in a 1902 paper noted inflammation around the pancreatic islet of a child who had died of diabetes. The connection between this inflammation and diabetes onset was further developed through the 1920s by Shields Warren, and the term "insulitis" was coined by Hanns von Meyenburg in 1940 to describe the phenomenon. Type 1 diabetes was described as an autoimmune disease in the 1970s, based on observations that autoantibodies against islets were discovered in diabetics with other autoimmune deficiencies. It was also shown in the 1980s that immunosuppressive therapies could slow disease progression, further supporting the idea that type 1 diabetes is an autoimmune disorder. The name juvenile diabetes was used earlier as it often first is diagnosed in childhood. Society and culture Type 1 and 2 diabetes was estimated to cause $10.5 billion in annual medical costs ($875 per month per diabetic) and an additional $4.4 billion in indirect costs ($366 per month per person with diabetes) in the U.S. In the United States $245 billion every year is attributed to diabetes. Individuals diagnosed with diabetes have 2.3 times the health care costs as individuals who do not have diabetes. One in ten health care dollars are spent on individuals with type 1 and 2 diabetes. Research Funding for research into type 1 diabetes originates from government, industry (e.g., pharmaceutical companies), and charitable organizations. Government funding in the United States is distributed via the National Institutes of Health, and in the UK via the National Institute for Health and Care Research or the Medical Research Council. The Juvenile Diabetes Research Foundation (JDRF), founded by parents of children with type 1 diabetes, is the world's largest provider of charity-based funding for type 1 diabetes research. Other charities include the American Diabetes Association, Diabetes UK, Diabetes Research and Wellness Foundation, Diabetes Australia, and the Canadian Diabetes Association. Artificial pancreas There has also been substantial effort to develop a fully automated insulin delivery system or "artificial pancreas" that could sense glucose levels and inject appropriate insulin without conscious input from the user. Current "hybrid closed-loop systems" use a continuous glucose monitor to sense blood sugar levels, and a subcutaneous insulin pump to deliver insulin; however, due to the delay between insulin injection and its action, current systems require the user to initiate insulin before taking meals. Several improvements to these systems are currently undergoing clinical trials in humans, including a dual-hormone system that injects glucagon in addition to insulin, and an implantable device that injects insulin intraperitoneally where it can be absorbed more quickly. Disease models Various animal models of disease are used to understand the pathogenesis and etiology of type 1 diabetes. Currently available models of T1D can be divided into spontaneously autoimmune, chemically induced, virus induced and genetically induced. The nonobese diabetic (NOD) mouse is the most widely studied model of type 1 diabetes. It is an inbred strain that spontaneously develops type 1 diabetes in 30–100% of female mice depending on housing conditions. Diabetes in NOD mice is caused by several genes, primarily MHC genes involved in antigen presentation. 
Like diabetic humans, NOD mice develop islet autoantibodies and inflammation in the islets, followed by reduced insulin production and hyperglycemia. Some features of human diabetes are exaggerated in NOD mice: the mice have more severe islet inflammation than humans, and a much more pronounced sex bias, with females developing diabetes far more frequently than males. In NOD mice the onset of insulitis occurs at 3–4 weeks of age. The islets of Langerhans are infiltrated by CD4+ and CD8+ T lymphocytes, NK cells, B lymphocytes, dendritic cells, macrophages, and neutrophils, similar to the disease process in humans. In addition to sex, breeding conditions, gut microbiome composition, and diet also influence the onset of T1D. The BioBreeding Diabetes-Prone (BB) rat is another widely used spontaneous experimental model for T1D. The onset of diabetes occurs in up to 90% of individuals (regardless of sex) at 8–16 weeks of age. During insulitis, the pancreatic islets are infiltrated by T lymphocytes, B lymphocytes, macrophages, and NK cells; the difference from the human course of insulitis is that CD4+ T lymphocytes are markedly reduced and CD8+ T lymphocytes are almost absent. This lymphopenia is the major drawback of the model. The disease is characterized by hyperglycemia, hypoinsulinemia, weight loss, ketonuria, and the need for insulin therapy for survival. BB rats are used to study the genetic aspects of T1D and are also used for interventional studies and studies of diabetic nephropathy. LEW-1AR1/-iddm rats are derived from congenic Lewis rats and represent a rarer spontaneous model for T1D. These rats develop diabetes at about 8–9 weeks of age and, unlike NOD mice, show no sex differences. In LEW rats, diabetes presents with hyperglycemia, glycosuria, ketonuria, and polyuria. The advantage of the model is the progression of the prediabetic phase, which is very similar to the human disease, with infiltration of the islets by immune cells about a week before hyperglycemia is observed. This makes the model suitable for intervention studies and for the search for predictive biomarkers, and it is also possible to observe the individual phases of pancreatic infiltration by immune cells. A further advantage of congenic LEW rats is their good viability after the manifestation of T1D (compared with NOD mice and BB rats). Chemically induced The chemical compounds alloxan and streptozotocin (STZ) are commonly used to induce diabetes and destroy β-cells in mouse and rat models. Both are cytotoxic glucose analogs that enter β-cells through the GLUT2 transporter and accumulate there, causing their destruction. The chemically induced destruction of β-cells leads to decreased insulin production, hyperglycemia, and weight loss in the experimental animal. Animal models prepared in this way are suitable for research into blood sugar-lowering drugs and therapies (e.g. for testing new insulin preparations) and for testing transplantation therapies. Their advantage is mainly the low cost; the disadvantage is the cytotoxicity of the chemical compounds. Genetically induced The most commonly used genetically induced T1D model is the so-called AKITA mouse (originally the C57BL/6NSlc mouse). The development of diabetes in AKITA mice is caused by a spontaneous point mutation in the Ins2 gene, which is responsible for the correct composition of insulin in the endoplasmic reticulum. Decreased insulin production is then associated with hyperglycemia, polydipsia, and polyuria. Severe diabetes develops within 3–4 weeks, and AKITA mice survive no longer than 12 weeks without treatment intervention. The description of the etiology of the disease shows that, unlike in the spontaneous models, the early stages of the disease are not accompanied by insulitis. AKITA mice are used to test drugs targeting endoplasmic reticulum stress reduction, to test islet transplants, and to study diabetes-related complications such as nephropathy, sympathetic autonomic neuropathy, and vascular disease. Type 1 diabetes (T1D) is a multifactorial autoimmune disease with a strong genetic component. Although environmental factors also play a significant role, the genetic susceptibility to T1D is well established, with several genes and loci implicated in disease development. The most significant genetic contribution to T1D comes from the human leukocyte antigen (HLA) region on chromosome 6p21. The HLA class II genes, particularly HLA-DR and HLA-DQ, are the strongest genetic determinants of T1D risk. Specific combinations of alleles such as HLA-DR3-DQ2 and HLA-DR4-DQ8 have been associated with a higher risk of developing T1D. Individuals carrying both of these haplotypes (heterozygous DR3/DR4) are at an even greater risk. These HLA variants are thought to influence the immune system's ability to differentiate between self and non-self antigens, leading to the autoimmune destruction of pancreatic beta cells. Conversely, some HLA haplotypes, such as HLA-DR15-DQ6, are associated with protection against T1D, suggesting that variations in these immune-related genes can either predispose to or protect against the disease. In addition to HLA, multiple non-HLA genes have been implicated in T1D susceptibility. Genome-wide association studies (GWAS) have identified over 50 loci associated with an increased risk of T1D. Some of the most notable genes include: INS: The insulin gene (INS) on chromosome 11p15 is one of the earliest identified non-HLA genes linked to T1D. A variable number tandem repeat (VNTR) polymorphism in the promoter region of the insulin gene affects its thymic expression, with certain alleles reducing the ability to develop immune tolerance to insulin, a key autoantigen in T1D. PTPN22: This gene encodes a protein tyrosine phosphatase involved in T-cell receptor signaling. A common single nucleotide polymorphism (SNP), R620W, in the PTPN22 gene is associated with an increased risk of T1D and other autoimmune diseases, suggesting its role in modulating immune responses. IL2RA: The interleukin-2 receptor alpha (IL2RA) gene, located on chromosome 10p15, plays a crucial role in regulating immune tolerance and T-cell activation. Variants in IL2RA affect susceptibility to T1D by altering the function of regulatory T-cells, which help maintain immune homeostasis. CTLA4: The cytotoxic T-lymphocyte-associated protein 4 (CTLA4) gene is another immune-related gene associated with T1D. CTLA4 acts as a negative regulator of T-cell activation, and certain variants are linked to impaired immune regulation and a higher risk of autoimmunity. T1D is considered a polygenic disease, meaning that multiple genes contribute to its development. While individual genes confer varying degrees of risk, it is the combination of several genetic factors, along with environmental triggers, that ultimately leads to disease onset.
Family studies show that T1D has a relatively high heritability, with siblings of affected individuals having about a 6–10% risk of developing the disease, compared to a 0.3% risk in the general population. The risk of T1D is also influenced by the presence of affected first-degree relatives. For instance, children of fathers with T1D have a higher risk of developing the disease compared to children of mothers with T1D. Monozygotic (identical) twins have a concordance rate of about 30–50%, highlighting the importance of both genetic and environmental factors in disease onset. Recent research has also focused on the role of epigenetics and gene-environment interactions in T1D development. Environmental factors such as viral infections, early childhood diet, and gut microbiome composition are thought to trigger the autoimmune process in genetically susceptible individuals. Epigenetic modifications, such as DNA methylation and histone modifications, may influence gene expression in response to these environmental triggers, further modulating the risk of developing T1D. While much progress has been made in understanding the genetic basis of T1D, ongoing research aims to unravel the complex interplay between genetic susceptibility, immune regulation, and environmental influences that contribute to disease pathogenesis. Virally induced Viral infections play a role in the development of a number of autoimmune diseases, including human type 1 diabetes. However, the mechanisms by which viruses are involved in the induction of type 1 DM are not fully understood. Virus-induced models are used to study the etiology and pathogenesis of the disease, in particular the mechanisms by which environmental factors contribute to or protect against the occurrence of type 1 DM. Among the most commonly used are coxsackievirus, lymphocytic choriomeningitis virus, encephalomyocarditis virus, and Kilham rat virus. Examples of virus-induced animals include NOD mice infected with coxsackie B4 that developed type 1 DM within two weeks.
Biology and health sciences
Specific diseases
Health
20254750
https://en.wikipedia.org/wiki/Syncope%20%28medicine%29
Syncope (medicine)
Syncope, commonly known as fainting or passing out, is a loss of consciousness and muscle strength characterized by a fast onset, short duration, and spontaneous recovery. It is caused by a decrease in blood flow to the brain, typically from low blood pressure. There are sometimes symptoms before the loss of consciousness such as lightheadedness, sweating, pale skin, blurred vision, nausea, vomiting, or feeling warm. Syncope may also be associated with a short episode of muscle twitching. Psychiatric causes can also be identified when a patient experiences fear, anxiety, or panic, particularly before a stressful event, usually medical in nature. When consciousness and muscle strength are not completely lost, it is called presyncope. It is recommended that presyncope be treated the same as syncope. Causes range from non-serious to potentially fatal. There are three broad categories of causes: heart or blood vessel related; reflex, also known as neurally mediated; and orthostatic hypotension. Issues with the heart and blood vessels are the cause in about 10% of cases and are typically the most serious, while neurally mediated syncope is the most common. Heart-related causes may include an abnormal heart rhythm, problems with the heart valves or heart muscle, and blockages of blood vessels from a pulmonary embolism or aortic dissection, among others. Neurally mediated syncope occurs when blood vessels expand and heart rate decreases inappropriately. This may occur from either a triggering event such as exposure to blood, pain, or strong feelings, or from a specific activity such as urination, vomiting, or coughing. Neurally mediated syncope may also occur when an area in the neck known as the carotid sinus is pressed. The third type of syncope is due to a drop in blood pressure when changing position, such as when standing up. This is often due to medications that a person is taking but may also be related to dehydration, significant bleeding, or infection. There also seems to be a genetic component to syncope. A medical history, physical examination, and electrocardiogram (ECG) are the most effective ways to determine the underlying cause. The ECG is useful to detect an abnormal heart rhythm, poor blood flow to the heart muscle, and other electrical issues, such as long QT syndrome and Brugada syndrome. Heart-related causes also often involve little or no prodrome. Low blood pressure and a fast heart rate after the event may indicate blood loss or dehydration, while low blood oxygen levels may be seen following the event in those with pulmonary embolism. More specific tests such as implantable loop recorders, tilt table testing, or carotid sinus massage may be useful in uncertain cases. Computed tomography (CT) is generally not required unless specific concerns are present. Other causes of similar symptoms that should be considered include seizure, stroke, concussion, low blood oxygen, low blood sugar, drug intoxication, and some psychiatric disorders, among others. Treatment depends on the underlying cause. Those who are considered at high risk following investigation may be admitted to hospital for further monitoring of the heart. Syncope affects about three to six out of every thousand people each year. It is more common in older people and in females. It is the reason for one to three percent of visits to emergency departments and admissions to hospital. Up to half of women over the age of 80 and a third of medical students describe at least one event at some point in their lives.
Of those presenting with syncope to an emergency department, about 4% died in the next 30 days. The risk of a poor outcome, however, depends very much on the underlying cause. Causes Causes range from non-serious to potentially fatal. There are three broad categories of causes: heart or blood vessel related; reflex, also known as neurally mediated; and orthostatic hypotension. Issues with the heart and blood vessels are the cause in about 10% of cases and are typically the most serious, while neurally mediated syncope is the most common. There also seems to be a genetic component to syncope. A recent genetic study has identified the first risk locus for syncope and collapse. The lead genetic variant, residing at chromosome 2q31.1, is an intergenic variant approximately 250 kb downstream of the ZNF804A gene. The variant affected the expression of ZNF804A, making this gene the strongest driver of the association. Neurally mediated syncope Reflex syncope or neurally mediated syncope occurs when blood vessels expand and heart rate decreases inappropriately, leading to poor blood flow to the brain. This may occur from either a triggering event such as exposure to blood, pain, or strong feelings, or a specific activity such as urination, vomiting, or coughing. Vasovagal syncope Vasovagal (situational) syncope is one of the most common types, which may occur in response to any of a variety of triggers, such as scary, embarrassing, or uneasy situations, during blood drawing, or in moments of sudden, unusually high stress. There are many different syncope syndromes which all fall under the umbrella of vasovagal syncope, related by the same central mechanism. First, the person is usually predisposed to decreased blood pressure by various environmental factors: for instance, a lower than expected blood volume from a low-salt diet in the absence of any salt-retaining tendency, or heat causing vasodilation and worsening the effect of the relatively insufficient blood volume. The next stage is the adrenergic response. If there is underlying fear or anxiety (e.g., social circumstances), or acute fear (e.g., acute threat, needle phobia), the vasomotor centre demands an increased pumping action by the heart (fight-or-flight response). This is set in motion via the adrenergic (sympathetic) outflow from the brain, but the heart is unable to meet requirements because of the low blood volume or decreased venous return. A feedback response to the medulla is triggered via the afferent vagus nerve. The high (ineffective) sympathetic activity is thereby modulated by vagal (parasympathetic) outflow, leading to excessive slowing of the heart rate. The abnormality lies in this excessive vagal response, causing loss of blood flow to the brain. The tilt-table test typically evokes the attack. Avoiding triggers, and possibly increasing salt intake, is often all that is needed. Associated symptoms may be felt in the minutes leading up to a vasovagal episode and are referred to as the prodrome. These consist of light-headedness, confusion, pallor, nausea, salivation, sweating, tachycardia, blurred vision, and a sudden urge to defecate, among other symptoms. Vasovagal syncope can be considered in two forms. The first consists of isolated episodes of loss of consciousness, unheralded by any warning symptoms for more than a few moments. These tend to occur in the adolescent age group and may be associated with fasting, exercise, abdominal straining, or circumstances promoting vasodilation (e.g., heat, alcohol). The subject is invariably upright.
The tilt-table test, if performed, is generally negative. The second form is recurrent syncope with complex associated symptoms. This is neurally mediated syncope (NMS). It is associated with any of the following: preceding or succeeding sleepiness, preceding visual disturbance ("spots before the eyes"), sweating, and lightheadedness. The subject is usually but not always upright. The tilt-table test, if performed, is generally positive. It is relatively uncommon. Syncope has been linked with psychological triggers. This includes fainting in response to the sight or thought of blood, needles, pain, and other emotionally stressful situations. One theory in evolutionary psychology is that fainting at the sight of blood might have evolved as a form of playing dead, which increased survival from attackers and might have slowed blood loss in a primitive environment. "Blood-injury phobia", as this is called, is experienced by about 15% of people. It is often possible to manage these symptoms with specific behavioral techniques. Another evolutionary psychology view is that some forms of fainting are non-verbal signals that developed in response to increased inter-group aggression during the Paleolithic. A non-combatant who has fainted signals that they are not a threat. This would explain the association between fainting and stimuli such as bloodletting and injuries, seen in blood-injection-injury type phobias such as needle phobia, as well as the gender differences. Much of this pathway was discovered in animal experiments by Bezold (Vienna) in the 1860s. In animals, it may represent a defense mechanism when confronted by danger ("playing possum"). A 2023 study identified neuropeptide Y receptor Y2 vagal sensory neurons (NPY2R VSNs) and the periventricular zone (PVZ) as a coordinated neural network participating in the cardioinhibitory Bezold–Jarisch reflex (BJR) regulating fainting and recovery. Situational syncope Syncope may be caused by specific behaviors including coughing, urination, defecation, vomiting, swallowing (deglutition), and following exercise. Manisty et al. note: "Deglutition syncope is characterised by loss of consciousness on swallowing; it has been associated not only with ingestion of solid food, but also with carbonated and ice-cold beverages, and even belching." Fainting can occur in "cough syncope" following severe fits of coughing, such as that associated with pertussis or "whooping cough". Neurally mediated syncope may also occur when an area in the neck known as the carotid sinus is pressed. A normal response to carotid sinus massage is reduction in blood pressure and slowing of the heart rate. Especially in people with hypersensitive carotid sinus syndrome, this response can cause syncope or presyncope. Cardiac Heart-related causes may include an abnormal heart rhythm, problems with the heart valves or heart muscle, or blockages of blood vessels from a pulmonary embolism or aortic dissection, among others. Cardiac arrhythmias The most common cause of cardiac syncope is cardiac arrhythmia (abnormal heart rhythm), wherein the heart beats too slowly, too rapidly, or too irregularly to pump enough blood to the brain. Some arrhythmias can be life-threatening. Two major groups of arrhythmias are bradycardia and tachycardia. Bradycardia can be caused by heart blocks. Tachycardias include SVT (supraventricular tachycardia) and VT (ventricular tachycardia). SVT does not cause syncope except in Wolff-Parkinson-White syndrome. Ventricular tachycardias originate in the ventricles.
VT causes syncope and can result in sudden death. Ventricular tachycardia, which describes a heart rate of over 100 beats per minute with at least three irregular heartbeats as a sequence of consecutive premature beats, can degenerate into ventricular fibrillation, which is rapidly fatal without cardiopulmonary resuscitation (CPR) and defibrillation. Long QT syndrome can cause syncope when it sets off ventricular tachycardia or torsades de pointes. The degree of QT prolongation determines the risk of syncope. Brugada syndrome also commonly presents with syncope secondary to arrhythmia. Typically, tachycardic-generated syncope is caused by a cessation of beats following a tachycardic episode. This condition, called tachycardia-bradycardia syndrome, is usually caused by sinoatrial node dysfunction or block or atrioventricular block. Obstructive cardiac lesion Blockages in major vessels or within the heart can also impede blood flow to the brain. Aortic stenosis and mitral stenosis are the most common examples. Major valves of the heart become stiffened and reduce the efficiency of the heart's pumping action. This may not cause symptoms at rest, but with exertion the heart is unable to keep up with increased demands, leading to syncope. Aortic stenosis presents with repeated episodes of syncope. Rarely, cardiac tumors such as atrial myxomas can also lead to syncope. Structural cardiopulmonary disease Diseases involving the shape and strength of the heart can be a cause of reduced blood flow to the brain, which increases risk for syncope. The most common cause in this category is fainting associated with an acute myocardial infarction or ischemic event. The faint in this case is primarily caused by an abnormal nervous system reaction similar to the reflex faints. Women are significantly more likely to experience syncope as a presenting symptom of a myocardial infarction. In general, faints caused by structural disease of the heart or blood vessels are particularly important to recognize, as they are warning of potentially life-threatening conditions. Among other conditions prone to trigger syncope (by either hemodynamic compromise or by a neural reflex mechanism, or both), some of the most important are hypertrophic cardiomyopathy, acute aortic dissection, pericardial tamponade, pulmonary embolism, aortic stenosis, and pulmonary hypertension. Other cardiac causes Sick sinus syndrome is a sinus node dysfunction causing alternating bradycardia and tachycardia; often there is a long pause (asystole) between heartbeats. Adams-Stokes syndrome is a cardiac syncope that occurs with seizures caused by complete or incomplete heart block. Symptoms include deep and fast respiration, weak and slow pulse, and respiratory pauses that may last for 60 seconds. Subclavian steal syndrome arises from retrograde (reversed) flow of blood in the vertebral artery or the internal thoracic artery, due to a proximal stenosis (narrowing) and/or occlusion of the subclavian artery. Symptoms such as syncope, lightheadedness, and paresthesias occur while exercising the arm on the affected side (most commonly the left). Aortic dissection (a tear in the aorta) and cardiomyopathy can also result in syncope. Various medications, such as beta blockers, may cause bradycardia-induced syncope. A pulmonary embolism can cause obstructed blood vessels and is the cause of syncope in less than 1% of people who present to the emergency department.
Blood pressure Orthostatic (postural) hypotensive syncope is caused primarily by an excessive drop in blood pressure when standing up from a previous position of lying or sitting down. When the head is elevated above the feet, the pull of gravity causes blood pressure in the head to drop. This is sensed by stretch receptors in the walls of vessels in the carotid sinus and aortic arch. These receptors then trigger a sympathetic nervous response to compensate and redistribute blood back into the brain. The sympathetic response causes peripheral vasoconstriction and increased heart rate. These together act to raise blood pressure back to baseline. Apparently healthy individuals may experience minor symptoms ("lightheadedness", "greying-out") as they stand up if blood pressure is slow to respond to the stress of upright posture. If the blood pressure is not adequately maintained during standing, faints may develop. However, the resulting "transient orthostatic hypotension" does not necessarily signal any serious underlying disease. It is as common as, or perhaps even more common than, vasovagal syncope. This may be due to medications, dehydration, significant bleeding, or infection. The most susceptible are elderly, frail individuals, or persons who are dehydrated from hot environments or inadequate fluid intake. For example, medical students would be at risk for orthostatic hypotensive syncope while observing long surgeries in the operating room. There is also evidence that exercise training can help reduce orthostatic intolerance. More serious orthostatic hypotension is often the result of certain commonly prescribed medications such as diuretics, β-adrenergic blockers, other anti-hypertensives (including vasodilators), and nitroglycerin. In a small percentage of cases, the cause of orthostatic hypotensive faints is structural damage to the autonomic nervous system due to systemic diseases (e.g., amyloidosis or diabetes) or neurological diseases (e.g., Parkinson's disease). Hyperadrenergic orthostatic hypotension refers to an orthostatic drop in blood pressure despite high levels of sympathetic adrenergic response. This occurs when a person with normal physiology is unable to compensate for >20% loss in intravascular volume. This may be due to blood loss, dehydration, or third-spacing. On standing, the person will experience reflex tachycardia (an increase of at least 20% over supine) and a drop in blood pressure. Hypoadrenergic orthostatic hypotension occurs when the person is unable to sustain a normal sympathetic response to blood pressure changes during movement despite adequate intravascular volume. There is little to no compensatory increase in heart rate or blood pressure when standing for up to 10 minutes. This is often due to an underlying disorder or medication use and is accompanied by other hypoadrenergic signs. Central nervous system ischemia The central ischemic response is triggered by an inadequate supply of oxygenated blood in the brain. Common examples include strokes and transient ischemic attacks. While these conditions often impair consciousness, they rarely meet the medical definition of syncope. Vertebrobasilar transient ischemic attacks may produce true syncope as a symptom. The respiratory system may compensate for dropping oxygen levels through hyperventilation, though a sudden ischemic episode may also proceed faster than the respiratory system can respond.
These processes cause the typical symptoms of fainting: pale skin, rapid breathing, nausea, and weakness of the limbs, particularly of the legs. If the ischemia is intense or prolonged, limb weakness progresses to collapse. The weakness of the legs causes most people to sit or lie down if there is time to do so. This may avert a complete collapse, but whether the patient sits down or falls down, the result of an ischemic episode is a posture in which less blood pressure is required to achieve adequate blood flow. An individual with very little skin pigmentation may appear to have all color drained from his or her face at the onset of an episode. This effect, combined with the following collapse, can make a strong and dramatic impression on bystanders. Vertebro-basilar arterial disease Arterial disease in the upper spinal cord or lower brain can cause syncope when there is a reduction in blood supply. This may occur with extending the neck or with use of medications to lower blood pressure. Other causes There are other conditions which may cause or resemble syncope. Seizures and syncope can be difficult to differentiate. Both often present as sudden loss of consciousness, and convulsive movements may be present or absent in either. Movements in syncope are typically brief and more irregular than seizures. Akinetic seizures can present with sudden loss of postural tone without associated tonic-clonic movements. Absence of a long post-ictal state is indicative of syncope rather than an akinetic seizure. Some rare forms, such as hair-grooming syncope, are of unknown cause. Subarachnoid hemorrhage may result in syncope. Often this is in combination with sudden, severe headache. It may occur as a result of a ruptured aneurysm or head trauma. Heat syncope occurs when heat exposure causes decreased blood volume and peripheral vasodilatation. Position changes, especially during vigorous exercise in the heat, may lead to decreased blood flow to the brain. It is closely related to other hypotension-related (low blood pressure) causes of syncope, such as orthostatic syncope. Some psychological conditions (anxiety disorder, somatic symptom disorder, conversion disorder) may cause symptoms resembling syncope. A number of psychological interventions are available. Low blood sugar can be a rare cause of syncope. Narcolepsy may present with sudden loss of consciousness similar to syncope. Diagnostic approach A medical history, physical examination, and electrocardiogram (ECG) are the most effective ways to determine the underlying cause of syncope. Guidelines from the American College of Emergency Physicians and American Heart Association recommend that a syncope workup include a thorough medical history, physical exam with orthostatic vitals, and a 12-lead ECG. The ECG is useful to detect an abnormal heart rhythm, poor blood flow to the heart muscle, and other electrical issues, such as long QT syndrome and Brugada syndrome. Heart-related causes also often have little history of a prodrome. Low blood pressure and a fast heart rate after the event may indicate blood loss or dehydration, while low blood oxygen levels may be seen following the event in those with pulmonary embolism. Routine broad panel laboratory testing detects abnormalities in <2–3% of results and is therefore not recommended. Based on this initial workup, many physicians will tailor testing and determine whether a person qualifies as 'high-risk', 'intermediate-risk', or 'low-risk' based on risk stratification tools.
More specific tests such as implantable loop recorders, tilt table testing, or carotid sinus massage may be useful in uncertain cases. Computed tomography (CT) is generally not required unless specific concerns are present. Other causes of similar symptoms that should be considered include seizure, stroke, concussion, low blood oxygen, low blood sugar, drug intoxication, and some psychiatric disorders, among others. Treatment depends on the underlying cause. Those who are considered at high risk following investigation may be admitted to hospital for further monitoring of the heart. A hemoglobin count may indicate anemia or blood loss. However, this has been useful in only about 5% of people evaluated for fainting. The tilt table test is performed to elicit orthostatic syncope secondary to autonomic dysfunction (neurogenic). A number of factors make a heart-related cause more likely, including age over 35, prior atrial fibrillation, and turning blue during the event. Electrocardiogram Electrocardiogram (ECG) findings that should be looked for include signs of heart ischemia, arrhythmias, atrioventricular blocks, a long QT, a short PR, Brugada syndrome, signs of hypertrophic obstructive cardiomyopathy (HOCM), and signs of arrhythmogenic right ventricular dysplasia (ARVD/C). Signs of HOCM include large voltages in the precordial leads, repolarization abnormalities, and a wide QRS with a slurred upstroke. Signs of ARVD/C include T wave inversion and epsilon waves in leads V1 to V3. It is estimated that from 20 to 50% of people have an abnormal ECG. However, while an ECG may identify conditions such as atrial fibrillation, heart block, or a new or old heart attack, it typically does not provide a definite diagnosis for the underlying cause of fainting. Sometimes, a Holter monitor may be used. This is a portable ECG device that can record the wearer's heart rhythms during daily activities over an extended period of time. Since fainting usually does not occur upon command, a Holter monitor can provide a better understanding of the heart's activity during fainting episodes. For people with more than two episodes of syncope and no diagnosis on "routine testing", an insertable cardiac monitor might be used. It lasts 28–36 months and is inserted just beneath the skin in the upper chest area. Imaging Echocardiography and ischemia testing may be recommended for cases where initial evaluation and ECG testing are nondiagnostic. For people with uncomplicated syncope (without seizures and with a normal neurological exam), computed tomography or MRI is not generally needed. Likewise, using carotid ultrasonography on the premise of identifying carotid artery disease as a cause of syncope is also not indicated. Although sometimes investigated as a cause of syncope, carotid artery problems are unlikely to cause that condition. Additionally, an electroencephalogram (EEG) is generally not recommended. A bedside ultrasound may be performed to rule out abdominal aortic aneurysm in people with a concerning history or presentation. Differential diagnosis Other diseases which mimic syncope include seizure, low blood sugar, certain types of stroke, and paroxysmal spells. While these may appear as "fainting", they do not fit the strict definition of syncope, which is a sudden, reversible loss of consciousness due to decreased blood flow to the brain. Management Management of syncope focuses on treating the underlying cause. This can be challenging, as the underlying cause is unclear in half of all cases.
Several risk stratification tools (explained below) have been developed to combat the vague nature of this diagnosis. People with an abnormal ECG reading, a history of congestive heart failure, a family history of sudden cardiac death, shortness of breath, a hematocrit <30%, hypotension, or evidence of bleeding should be admitted to the hospital for further evaluation and monitoring. Low-risk cases of vasovagal or orthostatic syncope in younger people with no significant cardiac history, no family history of sudden unexplained death, and a normal ECG and initial evaluation may be candidates for discharge to follow-up with their primary care provider. Recommended acute treatment of vasovagal and orthostatic (hypotension) syncope involves returning blood to the brain by positioning the person on the ground with legs slightly elevated, or sitting leaning forward with the head between the knees, for at least 10–15 minutes, preferably in a cool and quiet place. For individuals who have problems with chronic fainting spells, therapy should focus on recognizing the triggers and learning techniques to keep from fainting. At the appearance of warning signs such as lightheadedness, nausea, or cold and clammy skin, counter-pressure maneuvers that involve gripping the fingers into a fist, tensing the arms, and crossing the legs or squeezing the thighs together can be used to ward off a fainting spell. After the symptoms have passed, sleep is recommended. Lifestyle modifications are important for treating people experiencing repeated syncopal episodes. Avoiding triggers and situations where loss of consciousness would be seriously hazardous (operating heavy machinery, piloting commercial aircraft, etc.) has been shown to be effective. If fainting spells occur often without a triggering event, syncope may be a sign of an underlying heart disease. In cases where syncope is caused by cardiac disease, the treatment is much more sophisticated than that of vasovagal syncope and may involve pacemakers and implantable cardioverter-defibrillators, depending on the precise cardiac cause. Risk tools The San Francisco syncope rule was developed to isolate people who have a higher risk for a serious cause of syncope. High risk is anyone who has: congestive heart failure, hematocrit <30%, electrocardiograph abnormality, shortness of breath, or systolic blood pressure <90 mmHg. The San Francisco syncope rule, however, was not validated by subsequent studies. The Canadian syncope risk score was developed to help select low-risk people who may be viable for discharge home. A score of <0 on the Canadian syncope risk score is associated with a <2% risk of a serious adverse event within 30 days. It has been shown to be more effective at predicting adverse events than older syncope risk scores, even when those are combined with cardiac biomarkers. Epidemiology There are 18.1–39.7 syncope episodes per 1000 people in the general population. Rates are highest between the ages of 10 and 30 years. This is likely because of the high rates of vasovagal syncope in the young adult population. Older adults are more likely to have orthostatic or cardiac syncope. Syncope affects about three to six out of every thousand people each year. It is more common in older people and females. It is the reason for 2–5% of visits to emergency departments and admissions to hospital. Up to half of women over the age of 80 and a third of medical students describe at least one event at some point in their lives.
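The San Francisco syncope rule described under Risk tools above amounts to a simple screen over five findings, any one of which flags a person as high risk. A minimal illustrative sketch in Python follows; the function and parameter names are hypothetical, chosen only for illustration, and this is not taken from any clinical software.

# Illustrative sketch of the San Francisco syncope rule as summarized above:
# a person is considered high risk if any one of the five findings is present.
# Function and parameter names are hypothetical, for illustration only.

def san_francisco_high_risk(history_of_chf: bool,
                            hematocrit_percent: float,
                            abnormal_ecg: bool,
                            shortness_of_breath: bool,
                            systolic_bp_mmhg: float) -> bool:
    """Return True if any of the rule's high-risk criteria is met."""
    return (history_of_chf
            or hematocrit_percent < 30
            or abnormal_ecg
            or shortness_of_breath
            or systolic_bp_mmhg < 90)

# Example: normal hematocrit and blood pressure, no other findings -> not high risk
print(san_francisco_high_risk(False, 42.0, False, False, 120.0))  # False

As noted above, the rule was not validated by subsequent studies, so a screen of this kind is only a rough triage aid rather than a substitute for the Canadian syncope risk score or clinical judgment.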
Prognosis Of those presenting with syncope to an emergency department, about 4% died in the next 30 days. The risk of a poor outcome, however, depends very much on the underlying cause. Situational syncope is not associated with an increased risk of death or adverse outcomes. Cardiac syncope is associated with a worse prognosis than noncardiac syncope. Factors associated with poor outcomes include a history of heart failure, a history of myocardial infarction, ECG abnormalities, palpitations, signs of hemorrhage, syncope during exertion, and advanced age. Society and culture Fainting in women was a commonplace trope or stereotype in Victorian England and in contemporary and modern depictions of the period. Syncope and presyncope are common in young athletes. In 1990 the American college basketball player Hank Gathers suddenly collapsed and died during a televised intercollegiate basketball game. He had collapsed during a game a few months earlier and was diagnosed with exercise-induced ventricular tachycardia at the time. There was speculation that he had since stopped taking the prescribed medications on game days. Falling-out is a culture-bound syndrome primarily reported in the southern United States and the Caribbean. Etymology The term is derived from the Late Latin syncope, from Ancient Greek συγκοπή (sunkopē) 'cutting up', 'sudden loss of strength', from σύν (sun, "together, thoroughly") and κόπτειν (koptein, "strike, cut off").
Biology and health sciences
Symptoms and signs
Health
20256947
https://en.wikipedia.org/wiki/Flour%20mite
Flour mite
The flour mite, Acarus siro, a pest of stored grains and animal feedstuffs, is one of many species of grain and flour mites. An older name for the species is Tyroglyphus farinae. The flour mite, which is pale greyish white in colour with pink legs, is the most common species of mite in foodstuffs. The males are from long and the female is from long. The flour mites are found in grain and may become exceedingly abundant in poorly stored material. The female produces large clutches of eggs and the life cycle takes just over two weeks. The cast skins and dead bodies can form a fluffy brown material that accumulates under sacks on the warehouse floor. After a while, predatory mites tend to move in, and these keep the flour mites under control. Flour mites contaminate grains, flour, and animal feedstuffs, create allergens in the dust produced, and also transfer pathogenic microorganisms. Foodstuffs acquire a sickly sweet smell and an unpalatable taste. When fed infested feeds, animals show reduced feed intake, diarrhea, inflammation of the small intestine, and impaired growth. Pigs have their live-weight gain, feed-to-gain ratio, and nitrogen retention markedly reduced by infested feeds. Flour mites are intentionally inoculated into Mimolette cheese to improve the flavor. When used for this purpose, they may be referred to as "cheese mites". The mites sometimes bite humans, which can cause an allergic reaction known as baker's itch.
Biology and health sciences
Arachnids
Animals
34400150
https://en.wikipedia.org/wiki/Tempskya
Tempskya
Tempskya is an extinct genus of tree fern that lived during the Cretaceous period. Fossils have been found across both the Northern and Southern hemispheres. The growth habit of Tempskya was unlike that of any living fern or any other living plant, consisting of multiple conjoined dichotomous branching stems enmeshed within roots that formed a "false trunk". Description The trunk of Tempskya was actually a large collection of stems surrounded by adventitious roots. The false trunks can reach up to in height and up to in diameter. Small leaves grew from various points across the height of the trunk. This is in contrast to most tree ferns, where typically large leaves grow from the top of the trunk. Thin leaves have been discovered for the first time on Tempskya wyomingensis specimens; the more commonly seen fossilized leaf bases show that they covered the upper part of the trunk. Hypothesized growth pattern Examination of cross sections of various Tempskya specimens shows that those with the largest trunks have the smallest number of stems, and vice versa. From this, a possible growth pattern of Tempskya has been suggested: at the sporeling stage, Tempskya would consist of a single stem, which would begin to branch off distally. A "mantle" of adventitious roots would then develop around the stems to support them. Later on, many of the stems would begin to decay, while the adventitious roots would still provide support and absorb water for the grown plant. This growth pattern has also been hypothesized for Psaronius. Ecology Tempskya is thought to have grown in lowland environments close to water, like wetlands and riverbanks. Taxonomy The first fossils of Tempskya were originally described in 1824 as Endogenites erosa by Stokes and Webb, who considered them to be from a palm tree. The genus Tempskya was named by August Carl Joseph Corda in 1845, from specimens found in what is now the Czech Republic. The four species originally described by Corda were, in order: Tempskya pulchra, Tempskya macrocaula, Tempskya microrrhiza, and Tempskya schimperi. Tempskya is the sole member of the family Tempskyaceae. The family has been placed in the order "Filicales", which is now split into a number of orders of leptosporangiate ferns. They have been suggested to be members of Cyatheales, based on morphological similarities of the petiole and spores to some members of that order. Species Most taxonomists divide Tempskya species into two groups: those with a simple cortex whose inner cortex is only parenchymatous, without sclerenchyma, and those whose inner cortex has either discontinuous or continuous layers of sclerenchyma.
Parenchymatous inner cortex group:
Tempskya iwatensis Nishida, 1986 – Taneichi Formation, Japan, Late Cretaceous (Santonian)
Tempskya rossica Kidston and Gwynne Vaughan, 1911 – Mugadjar, Kazakhstan, Upper Cretaceous?
Tempskya uemurae Nishida, 2001 – Taneichi Formation, Japan, Late Cretaceous (Santonian)
Tempskya dernbachii Tidwell et Wright, 2003 emend. Martínez, Martínez and Olivo, 2015 – Mulichinco Formation, Argentina, Early Cretaceous (Valanginian)
Tempskya zellerii Ash and Read, 1976 – Mojado Formation, New Mexico, United States, Early Cretaceous (Albian)
Tempskya minor Read and Brown, 1937 – Bear River Formation, Wyoming, United States, Late Cretaceous
Tempskya jonesii Tidwell and Hebbert, 1992 – Cedar Mountain Formation, Burro Canyon Formation, Dakota Formation, Utah, United States, Cretaceous (Albian-Cenomanian)
Tempskya knowltonii Seward, 1924 – Montana and Utah (possibly Kootenai Formation), United States, Early Cretaceous
Continuous bands of sclerenchyma in the inner cortex group:
Tempskya grandis Read and Brown, 1937 – Bear River Formation, Wyoming, United States, Late Cretaceous
Tempskya superba Arnold, 1958 – Dawes County, Nebraska (possibly Lakota Formation), United States, Early Cretaceous?
Tempskya reesidei Ash and Read, 1976 – Mojado Formation, New Mexico, United States, Early Cretaceous (Albian)
Tempskya readii Tidwell and Hebbert, 1992 – Burro Canyon Formation, Utah, United States, mid Cretaceous
Tempskya riojana Barale and Viera, 1989 – Enciso Group, Spain, Early Cretaceous (Valanginian to Barremian)
Tempskya wesselii Arnold, 1945 – Kootenai Formation, Montana, Oregon and Utah, United States, Early Cretaceous
Tempskya judithae Clifford and Dettmann, 2005 – Winton Formation, Australia, Early Cretaceous (Albian)
Tempskya stichkae Tidwell and Hebbert, 1992 – Cedar Mountain Formation, Burro Canyon Formation, Utah, United States, Early Cretaceous (Albian)
Tempskya wyomingensis Arnold, 1945 – Utah, Wyoming and Colorado, United States, Lower Cretaceous?
Tempskya zhangii Yang, Liu et Cheng, 2017 – Songliao Basin, Heilongjiang, China, Cretaceous
Incertae sedis:
Tempskya pulchra Corda, 1845 – Czech Republic, Cretaceous
Tempskya schimperi Corda, 1845 – Germany, Czech Republic, France, Lower Cretaceous
Tempskya varians Velenovsky, 1888 – Czech Republic, Cenomanian (?)
Tempskya cretacea Hosius and Marck, 1880 – Westphalia, Germany, Late Cretaceous
Tempskya erosa Stokes et Webb, 1915 – Lower Greensand Group, England, Lower Cretaceous
Tempskya whitei Berry, 1991 – Patapsco Formation, Maryland, United States, Lower Cretaceous
Fossil sites Tempskya finds were thought to be exclusive to the Northern Hemisphere until specimens were discovered in Argentina and Australia, in 2003 and 2005, respectively. Tempskya fossils have also been discovered in the Czech Republic (2002) and Japan (1986).
Biology and health sciences
Ferns
Plants
34408890
https://en.wikipedia.org/wiki/WeChat
WeChat
WeChat, or Weixin in Chinese, is a Chinese instant messaging, social media, and mobile payment app developed by Tencent. First released in 2011, it became the world's largest standalone mobile app in 2018 with over 1 billion monthly active users. WeChat has been described as China's "app for everything" and a super-app because of its wide range of functions. WeChat provides text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video conferencing, video games, mobile payment, sharing of photographs and videos, and location sharing. Accounts registered using Chinese phone numbers are managed under the Weixin brand, and their data is stored in mainland China and subject to Weixin's terms of service and privacy policy, which forbids content which "endanger[s] national security, divulge[s] state secrets, subvert[s] state power and undermine[s] national unity". Non-Chinese numbers are registered under WeChat, and WeChat users are subject to different, less strict terms of service and a stricter privacy policy, and their data is stored in the Netherlands for users in the European Union, and in Singapore for other users. User activity on Weixin, the Chinese version of the app, is analyzed, tracked and shared with Chinese authorities upon request as part of the mass surveillance network in China. Chinese-registered Weixin accounts censor politically sensitive topics. Any interactions between Weixin and WeChat users are subject to the terms of service and privacy policies of both services. History By 2010, Tencent had already attained a massive user base with their desktop messenger app QQ. Recognizing that smartphones were likely to disrupt this status quo, CEO Pony Ma sought to proactively invest in alternatives to Tencent's own QQ messenger app. WeChat began as a project at Tencent's Guangzhou Research and Project Center in October 2010. The original version of the app was created by Allen Zhang, named "Weixin" by Pony Ma, and launched in 2011. The user adoption of WeChat was initially very slow, with users wondering why key features were missing; however, after the release of the walkie-talkie-like voice messaging feature in May of that year, growth surged. By 2012, when the number of users reached 100 million, Weixin was re-branded "WeChat" by President Martin Lau for the international market. During a period of government support of e-commerce development, for example in the 12th five-year plan (2011–2015), WeChat also saw new features enabling payments and commerce in 2013, which saw massive adoption after their virtual red envelope promotion for Chinese New Year 2014. WeChat had over 889 million monthly active users by 2016, and as of 2019 WeChat's monthly active users had risen to an estimated one billion. As of January 2022, it was reported that WeChat has more than 1.2 billion users. After the launch of WeChat payment in 2013, its users reached 400 million the next year, 90 percent of whom were in China. By comparison, Facebook Messenger and WhatsApp had about one billion monthly active users in 2016 but did not offer most of the other services available on WeChat. For example, in Q2 2017, WeChat's revenues from social media advertising were about US$0.9 billion (RMB 6 billion) compared with Facebook's total revenues of US$9.3 billion, 98% of which were from social media advertising. WeChat's revenues from its value-added services were US$5.5 billion. By 2018, WeChat had been used by 93.5% of Chinese internet users.
In response to a border dispute between India and China, WeChat was banned in India in June 2020 along with several other Chinese apps, including TikTok. U.S. President Donald Trump sought to ban U.S. "transactions" with WeChat through an executive order but was blocked by a preliminary injunction issued in the United States District Court for the Northern District of California in September 2020. Joe Biden officially dropped Trump's efforts to ban WeChat in the U.S. in June 2021. Features Messaging WeChat provides a variety of features including text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video calls and conferencing, video games, photograph and video sharing, as well as location sharing. WeChat also allows users to exchange contacts with people nearby via Bluetooth, as well as providing various features for contacting people at random if desired (if people are open to it). It can also integrate with other social networking services such as Facebook and Tencent QQ. Photographs may also be embellished with filters and captions, and an automatic translation service is available that can also translate conversations during messaging. WeChat supports different instant messaging methods, including text messages, voice messages, walkie-talkie, and stickers. Users can send previously saved or live pictures and videos, profiles of other users, coupons, lucky money packages, or current GPS locations to friends, either individually or in a group chat. WeChat also provides a message recall feature that allows users to withdraw information (e.g., images, documents) sent within the previous 2 minutes in a conversation. To use this feature, users select the message or file to be recalled by long pressing it. In the menu that appears, selecting 'recall' and then 'ok' completes the withdrawal process. The selected messages or files are then removed from the WeChat chat window on both the sender's and recipient's phones. WeChat also provides a voice-to-text feature for situations where listening to voice messages is not convenient, as well as a basic ability to recognize emojis based on different tones of voice, though the potential privacy leaks involved have raised concern. A distance-sensing feature is implemented in WeChat: the receiver (earpiece) mode is activated when the phone is brought into close proximity to the ear, and once the phone is held at that distance the sensor automatically disables the phone's loudspeaker. This feature eliminates the risk of the user's voice messages being inadvertently broadcast to the general public. Public accounts WeChat users can register as a public account, which enables them to push feeds to subscribers, interact with subscribers, and provide subscribers with services. Users can also create an official account, which falls under the service, subscription, or enterprise account types. Once users, as individuals or organizations, set up a type of account, they cannot change it to another type. By the end of 2014, the number of WeChat official accounts had reached 8 million. Official accounts of organizations can apply to be verified (at a cost of 300 RMB, or about US$45). Official accounts can be used as a platform for services such as hospital pre-registration or credit card services. To create an official account, the applicant must register with Chinese authorities, which discourages "foreign companies".
In April 2022, WeChat announced that it would start displaying the location of users in China every time they post on a public account. Meanwhile, posts by overseas users on public accounts would also display a country based on the poster's IP address. Moments "Moments" is WeChat's brand name for its social feed of friends' updates. "Moments" is an interactive platform that allows users to post images, text, and short videos taken by users. It also allows users to share articles and music (associated with QQ Music or other web-based music services). Friends in the contact list can like the content and leave comments, functioning similarly to a private social network. In 2017 WeChat had a policy of a maximum of two advertisements per day per Moments user. Privacy in WeChat works by groups of friends: only the friends from the user's contact list are able to view their Moments content and comments. The user's friends are able to see likes and comments from other users only if they are in a mutual friend group. For example, friends from high school are not able to see the comments and likes left by friends from university. When users post to their Moments, they can separate their friends into groups and decide whether a given Moment can be seen by particular groups of people. Content posted can be set to "Private", so that only the user can view it. Unlike posts on Weibo or Instagram, these are shared only with the user's friends and are unlikely to go viral. More recently, WeChat introduced a feature that lets users pin posts to the top of their own Moments. Regardless of the viewing period a user has set for their posts, pinned posts remain visible at all times. This feature enables people to mark important posts so that they are easy to find, and to permanently display some posts while maintaining overall privacy. Weixin Pay digital payment services Users who have provided bank account information may use the app to pay bills, order goods and services, transfer money to other users, and pay in stores if the stores have a Weixin payment option. Vetted third parties, known as "official accounts", offer these services by developing lightweight "apps within the app". Users can link their Chinese bank accounts, as well as Visa, MasterCard, and JCB cards. WeChat Pay, officially referred to as Weixin Pay in China, is a digital wallet service incorporated into Weixin, which allows users to perform mobile payments and send money between contacts. Although users receive immediate notification of the transaction, the Weixin Pay system is not an instant payment instrument, because the funds transfer between counterparts is not immediate. The settlement time depends on the payment method chosen by the customer. All Weixin users have their own Weixin Pay accounts. Users can acquire a balance by linking their Weixin account to their debit cards, or by receiving money from other users. For non-Chinese users of Weixin Pay, an additional identity verification process of providing a photo of a valid ID is required before certain functions of Weixin Pay become available. Users who link their credit card can only make payments to vendors, and cannot use this to top up WeChat balances. Weixin Pay can be used for digital payments, as well as payments from participating vendors. As of March 2016, Weixin Pay had over 300 million users. Weixin Pay's main competitor in China and the market leader in online payments is Alibaba Group's Alipay.
Alibaba company founder Jack Ma considered Weixin's red envelope feature to be a "Pearl Harbor moment", as it began to erode Alipay's historic dominance in the online payments industry in China, especially in peer-to-peer money transfer. The success prompted Alibaba to launch its own version of virtual red envelopes in its competing Laiwang service. Other competitors, Baidu Wallet and Sina Weibo, also launched similar features. In 2019 it was reported that Weixin had overtaken Alibaba with 800 million active Weixin mobile payment users versus 520 million for Alibaba's Alipay. However, Alibaba had a 54 per cent share of the Chinese mobile online payments market in 2017, compared to Weixin's 37 per cent share. In the same year, Tencent introduced "WeChat Pay HK", a payment service for users in Hong Kong. Transactions are carried out with the Hong Kong dollar. In 2019 it was reported that Chinese users could use WeChat Pay in 25 countries outside China, including Italy, South Africa, and the UK. Enterprise WeChat For work purposes and business communication, a special version of WeChat called WeCom (formerly known as Enterprise WeChat, or Qiye Weixin, and as WeChat Work before November 2020) was launched in 2016. The app was meant to help employees separate work from private life. In addition to the usual chat features, the program let companies and their employees keep track of annual leave days and expenses that need to be reimbursed; employees could ask for time off or clock in to show they were at work. WeChat Mini Program In 2017, WeChat launched a feature called "Mini Programs". A mini program is an app within an app. Business owners can create mini apps in the WeChat system, implemented using proprietary versions of CSS, JavaScript, and templated XML, with proprietary APIs. Users may install these inside the WeChat app. In January 2018, WeChat announced a record of 580,000 mini programs. With one Mini Program, consumers could scan a Quick Response code using their mobile phone at a supermarket counter and pay the bill through the user's WeChat mobile wallet. WeChat games have become hugely popular, with the "Jump Jump" game attracting 400 million players in less than 3 days and attaining 100 million daily active users within two weeks of its launch, as of January 2018. Since the launch of WeChat Mini Programs, their daily active user count has increased dramatically: in 2017 there were only 160 million daily active users, but the number reached 450 million in 2021. WeChat Channels In 2020, WeChat Channels were launched. They are a short video platform within WeChat that allows users to create and share short video clips and photos to their own WeChat Channel. Users of Channels can also discover content posted to other Channels by others via the in-built feed. Each post can include hashtags, a location tag, a short description, and a link to a WeChat official account article. In September 2021, it was reported that WeChat Channels began allowing users to upload hour-long videos, twice the duration limit previously imposed on all WeChat Channels videos. Comparisons are often drawn between WeChat Channels and TikTok (or Douyin) for their similarity in features. In January 2022, there were reports that WeChat was set to diversify further and place more emphasis on new products and services like WeChat Channels, amid new regulatory restrictions imposed in China.
By June 2021, WeChat Channels had accumulated over 200 million users, and WeChat Channels had 500 million DAU (daily active users), growing at 79% year-on-year. More than 27 million people had used the platform to watch Irish boy band Westlife's online concert in 2021, and 15 million users also viewed the Shenzhou 12 spaceflight launch using the app service. Easy Mode In September 2021, WeChat introduced a brand-new feature on its platform called Easy Mode. It was mainly designed for elderly people, offering higher readability by providing a larger font size, sharper colours, and bigger buttons. Another feature provided in this update was the ability to listen to text messages. Easy Mode was released in version 8.0.14 for both iOS and Android. Guardian Mode Guardian Mode is a function in WeChat for protecting users under 14 years old. It was introduced to promote safety and provide a secure environment for WeChat users. When Guardian Mode is enabled, the "people nearby", "games", and "search" functions are not accessible in the interface. The Channels function in WeChat, a video mini program, shows only content suitable for adolescents. Additionally, WeChat users who turn on Guardian Mode are only able to add friends through QR codes and group chats. Moreover, under the privacy settings of Guardian Mode, users are only able to view the 10 latest Moments posts and cannot view the 10 latest Moments posts of non-friend users. Others In January 2016, Tencent launched WeChat Out, a VOIP service allowing users to call mobile phones and landlines around the world. The feature allowed purchasing credit within the app using a credit card. WeChat Out was originally only available in the United States, India, and Hong Kong, but coverage was later expanded to Thailand, Macau, Laos, and Italy. In March 2017, Tencent released WeChat Index. By entering a search term on the WeChat Index page, users could check the popularity of this term over the past 7, 30, or 90 days. The data was mined from official WeChat accounts, and metrics such as social sharing, likes, and reads were used in the evaluation. In May 2017, Tencent added news feed and search functions to its WeChat app. The Financial Times reported this was a "direct challenge to Chinese search engine Baidu". In 2017, WeChat was reported to be developing an augmented reality (AR) platform as part of its service offering. Its artificial intelligence team was working on a 3D rendering engine to create a realistic appearance of detailed objects in smartphone-based AR apps. They were also developing a simultaneous localization and mapping technology, which would help calculate the position of virtual objects relative to their environment, enabling AR interactions without the need for markers, such as Quick Response codes or special images. Chinese courts allow the parties to communicate with the courts via WeChat, through which parties can file lawsuits, participate in proceedings, present evidence, and listen to verdicts. As of December 2019, more than 3 million parties had used WeChat for litigation. Since spring 2020, WeChat users have been able to change their WeChat ID more than once, although only once per year; prior to this, a WeChat ID could not be changed more than once. On 17 June 2020, WeChat released a new add-on called "WeChat Nudge". The feature was first introduced in MSN Messenger 7.0, in 2005. The feature was called Buzz in Yahoo!
Messenger, and the feature had interoperability with MSN Messenger's Nudge. Similar to Messenger and Yahoo, users can access WeChat Nudge by double-clicking on another user's profile in the chat. This virtually shakes the user's profile photo and sends a vibration notification. Both users must have the latest WeChat update. If a user does not have the latest update, they will be unable to nudge another user but can still receive nudges. A user can only nudge another user if they have previous conversations; newly added friends without previous messages cannot nudge each other. On January 16, 2022, a new version of WeChat added seven major functions for users on iOS version 8.0.17, Android version 8.0.18, or newer. In the Personal Information Authority function, users can check, through the personal information collection list, the number of times their personal information has been edited in the past year, including profile photo, name, mobile number, gender, region, personalized signature, and address. On March 30, 2022, citing relevant Chinese laws and regulations and the need to prevent the risk of speculative hype in virtual currency transactions, the WeChat public platform issued rules governing official accounts and mini programs involved in secondary sales of digital collectibles. WeChat Business WeChat Business is a more recent mobile social network business model that emerged after e-commerce, utilizing business relationships and friendships to maintain customer relationships. Compared with traditional e-business platforms such as JD.com and Alibaba, WeChat Business offers a wide range of influence and profit with less investment and a lower entry threshold, which attracts many people to join. Marketing modes B2C Mode This is the main profit mode of WeChat Business: launching advertisements and providing services through a WeChat Official Account. This mode has been used by many hospitals, banks, fashion brands, internet companies, and personal blogs because the Official Account can access online payment, location sharing, voice messages, and mini-games. It is like a 'mini app', so the company has to hire specific staff to manage the account. By 2015, there were more than 100 million WeChat Official Accounts on this platform. C2C Mode In this mode, individual WeChat salespeople promote products themselves, which belongs to the C2C mode. Individual sellers post photos and messages about the products they represent on WeChat Moments or in WeChat groups and sell products to their WeChat friends. They also develop friendships with their customers by sending messages during festivals or writing comments under their updates on WeChat Moments to increase trust. Continuing to communicate with regular customers also increases word-of-mouth communication, which influences decision-making. Some WeChat businesspeople already have an online shop on Taobao, but use WeChat to maintain existing customers. Existing problems As more and more people have joined WeChat Business, it has brought many problems. For example, some sellers have begun to sell counterfeit luxury goods such as bags, clothes, and watches. Some sellers have disguised themselves as international flight attendants or overseas students and posted fake stylish photos on WeChat Moments. They then claim that they can provide overseas purchasing services but sell counterfeit luxury goods at the same price as the authentic ones. Other popular products sold on WeChat are facial masks.
The marketing mode is like that of Amway, but most goods are unbranded products which come from illegal factories and contain excess hormones, which could have serious effects on customers' health. However, it is difficult for customers to defend their rights because a large number of sellers' identities are uncertified. Additionally, the lack of any supervision mechanism in WeChat business also provides opportunities for criminals to continue this illegal behavior. In early 2022, WeChat suspended more than a dozen NFT (non-fungible token) public accounts to clean up crypto speculation and scalping. The crackdown on NFT-related content relates to domestic digital collectibles, which cannot be resold for profit. Marketing Campaigns In a 2016 campaign, users could upload a paid photo on "Moments" and other users could pay to see the photo and comment on it. The photos were taken down each night. Collaborations In 2014, Burberry partnered with WeChat to create its own WeChat apps around its fall 2014 runway show, giving users live streams from the shows. Another brand, Michael Kors, used WeChat to give live updates from its runway show, and later to run the "Chic Together" WeChat photo contest campaign. In 2016, L'Oréal China cooperated with Papi Jiang to promote their products. Over one million people watched her first video promoting L'Oréal's beauty brand MG. In 2016, WeChat partnered with 60 Italian companies (WeChat had an office in Milan) who were able to sell their products and services on the Chinese market without having to get a license to operate a business in China. In 2017, Andrea Ghizzoni, European director of Tencent, said that 95 percent of global luxury brands used WeChat. In 2020 Burberry and WeChat collaborated to design a shop in Shenzhen, where Burberry has a flagship store, as well as an app allowing shoppers to interact with the shop digitally. Platforms WeChat's mobile phone app is available only for Android, HarmonyOS, and iOS. BlackBerry, Windows Phone, and Symbian phones were previously supported. However, as of 22 September 2017, WeChat no longer worked on Windows Phone; the company ceased development of the app for Windows Phone before the end of 2017. Although web-based OS X and Windows clients exist, these require the user to have the app installed on a supported mobile phone for authentication, and neither message roaming nor 'Moments' are provided. Thus, without the app on a supported phone, it is not possible to use the web-based WeChat clients on the computer. The company also provides WeChat for Web, a web-based client with messaging and file transfer capabilities. Other functions cannot be used on it, such as the detection of nearby people, or interacting with Moments or Official Accounts. To use the web-based client, it is necessary to first scan a QR code using the phone app. This means it is not possible to access the WeChat network if a user does not possess a suitable smartphone with the app installed. WeChat could be accessed on Windows using BlueStacks until December 2014. After that, WeChat blocked Android emulators, and accounts that sign in from emulators may be frozen. There have been some reported issues with the Web client. Specifically, when using English, some users have experienced autocorrect, autocomplete, auto-capitalization, and auto-delete behavior as they type messages, and even after the message was sent.
For example, "gonna" was autocorrected to "go", the E's were auto-deleted in "need", "wechat" was auto-capitalized to "Wechat" but not "WeChat", and after the message was sent, "don't" got auto-corrected to "do not". However, the auto-corrected word(s) after the message was sent appeared on the phone app as the user had originally typed it ("don't" was seen on the phone app whereas "do not" was seen on the Web client). Users could translate a foreign language during a conversation and the words were posted on Moments. WeChat allows group video calls. Controversies State surveillance and intelligence gathering Weixin, the Chinese version of WeChat, operates from China under Chinese law, which includes strong censorship provisions and interception protocols. Its parent company is obliged to share data with the Chinese government under the China Internet Security Law and National Intelligence Law. Weixin can access and expose the text messages, contact books, and location histories of its users. Due to Weixin's popularity, the Chinese government uses Weixin as a data source to conduct mass surveillance in China. Some states and regions such as India, Australia the United States, and Taiwan fear that the app poses a threat to national or regional security for various reasons. In June 2013, the Indian Intelligence Bureau flagged WeChat for security concerns. India has debated whether or not they should ban WeChat for the possibility that too much personal information and data could be collected from its users. In Taiwan, legislators were concerned that the potential exposure of private communications was a threat to regional security. In 2016, Tencent was awarded a score of zero out of 100 in an Amnesty International report ranking technology companies on the way they implement encryption to protect the human rights of their users. The report placed Tencent last out of a total of 11 companies, including Facebook, Apple, and Google, for the lack of privacy protections built into Weixin and QQ. The report found that Tencent did not make use of end-to-end encryption, which is a system that allows only the communicating users to read the messages. It also found that Tencent did not recognize online threats to human rights, did not disclose government requests for data, and did not publish specific data about its use of encryption. A September 2017 update to the platform's privacy policy detailed that log data collected by Weixin included search terms, profiles visited, and content that had been viewed within the app. Additionally, metadata related to the communications between Weixin users—including call times, duration, and location information—was also collected. This information, which was used by Tencent for targeted advertising and marketing purposes, might be disclosed to representatives of the Chinese government: To comply with an applicable law or regulations. To comply with a court order, subpoena, or other legal process. In response to a request by a government authority, law enforcement agency, or similar body. In May 2020, Citizen Lab published a study which claimed that WeChat monitors foreign chats to hone its censorship algorithms. On August 14, 2020, Radio Free Asia reported that in 2019, Gao Zhigang, a citizen of Taiyuan city, Shanxi Province, China, used Weixin to forward a video to his friend Geng Guanjun in USA. Gao was later convicted on the charge of the crime of picking quarrels and provoking trouble, and sentenced to ten-months imprisonment. 
Court documents show that China's network management and propaganda departments directly monitor Weixin users, and that Chinese police used big-data facial recognition technology to identify Geng Guanjun as an overseas democracy activist. In September 2020, Chevron Corporation mandated that its employees delete WeChat from company-issued phones. Privacy issues Users inside and outside of China have also expressed concern about the privacy issues of the app. Human rights activist Hu Jia was jailed for three years for sedition. He speculated that officials of the Internal Security Bureau of the Ministry of Public Security had listened to the voice messages he sent to his friends, because they repeated the contents of those messages back to him. Chinese authorities have further accused the Weixin app of threatening individual safety. China Central Television (CCTV), a state-run broadcaster, featured a piece in which Weixin was described as an app that helped criminals due to its location-reporting features. CCTV gave an example of such accusations by reporting the murder of a single woman by a man she had met on Weixin, who killed her after attempting to rob her. According to reports, the location-reporting feature was how the man knew the victim's whereabouts. Authorities within China have linked Weixin to numerous crimes. The city of Hangzhou, for example, reported over twenty crimes related to Weixin in the span of three months. XcodeGhost malware In 2015, Apple published a list of the top 25 most popular apps infected with the XcodeGhost malware, confirming earlier reports that version 6.2.5 of WeChat for iOS was infected with it. The malware originated in a counterfeit version of Xcode (dubbed "XcodeGhost"), Apple's software development tools, and made its way into the compiled app through a modified framework. Despite Apple's review process, WeChat and other infected apps were approved and distributed through the App Store. Although the cybersecurity company Palo Alto Networks claimed that the malware was capable of prompting users for their account credentials, opening URLs and reading the device's clipboard, Apple responded that the malware was not capable of doing "anything malicious" or transmitting any personally identifiable information beyond "apps and general system information" and that it had no information that suggested that this had happened. In 2015, the internet security company Malwarebytes considered this to be the largest security breach in the App Store's history. Ban in India In June 2020, the Government of India banned WeChat along with 58 other Chinese apps, citing data and privacy issues, in response to a border clash between India and China earlier in the year. India's Ministry of Electronics and Information Technology claimed that the banned Chinese apps were "stealing and surreptitiously transmitting users' data in an unauthorized manner to servers which have locations outside India" and were "hostile to national security and defense of India". Previous ban in Russia On 6 May 2017, Russia blocked access to WeChat for failing to give its contact details to the Russian communications watchdog. The ban was swiftly lifted on 11 May 2017 after Tencent provided "relevant information" for registration to Roskomnadzor. In March 2023, Russia banned government officials from using messaging apps operated by foreign companies, including WeChat. Ban and injunction against ban in the United States On August 6, 2020, U.S.
President Donald Trump signed an executive order, invoking the International Emergency Economic Powers Act, seeking to ban WeChat in the U.S. within 45 days, due to its connections with the Chinese company Tencent. This was signed alongside a similar executive order targeting TikTok and its Chinese parent company ByteDance. The Department of Commerce issued orders on September 18, 2020, to enact the ban on WeChat and TikTok by the end of September 20, 2020, citing national security and data privacy concerns. The measures ban the transferring of funds or processing of payments through WeChat in the U.S. and ban any company from offering hosting, content delivery networks or internet transit to WeChat. Magistrate Judge Laurel Beeler of the United States District Court for the Northern District of California issued a preliminary injunction blocking the Department of Commerce order on both TikTok and WeChat on September 20, 2020, based on respective lawsuits filed by TikTok and the US WeChat Users Alliance, citing the merits of the plaintiffs' First Amendment claims. The Justice Department had previously asked Beeler not to block the order banning the apps, saying that doing so would undermine the president's ability to deal with threats to national security. In her ruling, Beeler said that while the government had established that Chinese government activities raised significant national security concerns, it showed little evidence that the WeChat ban would address those concerns. On June 9, 2021, U.S. President Joe Biden signed an executive order revoking the ban on WeChat and TikTok. Instead, he directed the commerce secretary to investigate foreign influence enacted through the apps. Montana has banned the installation of WeChat on government devices since June 1, 2023. Ban in Canada In October 2023, Canada banned WeChat on all government devices. Notorious Markets list In 2022, the Office of the United States Trade Representative (USTR) added WeChat's ecommerce ecosystem to its list of Notorious Markets for Counterfeiting and Piracy. In January 2025, USTR removed WeChat from its list of notorious markets. 2023 Australian Indigenous Voice referendum In the lead-up to the 2023 Australian Indigenous Voice referendum, an unsuccessful attempt to enshrine an Indigenous Voice to Parliament in the Constitution, WeChat and other popular Chinese social media platforms were criticised by both Yes and No supporters and by both Chinese and non-Chinese Australians for their large amounts of misleading content about the referendum, as well as numerous posts that allegedly promoted anti-Indigenous racism. Researchers from Monash University in Melbourne found that fewer than one in 10 WeChat posts related to the referendum were supportive of the Yes case, most of which were paid advertisements from the official Yes campaign. The study also found that the vast majority of comments on Voice-related WeChat posts were explicitly supportive of the No case. Chinese Australians are a very large minority group in Australia, with many using WeChat as a social media platform. While the use of Chinese apps such as WeChat in Australia has long been controversial over their potential links to the Chinese government, WeChat is nevertheless seen as a major social media platform in Australia, directly competing with Western platforms among Chinese speakers. As voting is compulsory for all Australian citizens over the age of 18, social media advertising is crucial for election campaigns in Australia.
Therefore, the sheer volume of No campaign material, some of which contained misinformation that even most No supporters did not agree with, had the potential to sway the votes of Chinese Australians towards the ultimately successful No case. Censorship Censorship of global issues and separation into two separate platforms Starting in 2013, reports arose that Chinese-language searches even outside China were being keyword filtered and then blocked. This occurred not only on incoming traffic to China from foreign countries but also on traffic exclusively between foreign parties (the service had already censored its communications within China). In the international example of blocking, users saw a message on their screens stating that their message contained restricted words and asking them to check it again; the restricted words were the Chinese characters of the name of a Guangzhou-based paper called Southern Weekly (or, alternatively, Southern Weekend). The next day, Tencent released a statement addressing the issue, saying: "A small number of WeChat international users were not able to send certain messages due to a technical glitch this Thursday. Immediate actions have been taken to rectify it. We apologize for any inconvenience it has caused to our users. We will continue to improve the product features and technological support to provide a better user experience." WeChat eventually built two different platforms to avoid this problem: one for the Chinese mainland (Weixin) and one for the rest of the world (WeChat). The problem existed because WeChat's servers were all located in China and thus subject to its censorship rules. Following the overwhelming victory of pro-democracy candidates in the 2019 Hong Kong local elections, Weixin censored messages related to the election and disabled the accounts of posters in other countries such as the U.S. and Canada. Many of those targeted were of Chinese ancestry. In 2020, Weixin started censoring messages concerning the COVID-19 pandemic. In December 2020, Weixin blocked a post by Australian Prime Minister Scott Morrison during a diplomatic spat between Australia and China. In his Weixin post, Morrison had criticized a doctored image posted by a Chinese diplomat and praised the Chinese-Australian community. According to Reuters, the company claimed to have blocked the post because it "violated regulations, including distorting historical events and confusing the public." Two censorship systems In 2016, the Citizen Lab published a report saying that WeChat was using different censorship policies in mainland China and other areas. They found that: Keyword filtering was only enabled for users who registered via phone numbers from mainland China; Users no longer received notices when messages were blocked; Filtering was stricter in group chats; Keywords were not static, and some newly found censored keywords were responses to current news events; The internal browser in WeChat blocked Chinese accounts from accessing some websites, such as gambling sites, Falun Gong material and critical reports on China, while international users were only blocked from accessing some gambling and pornography websites. Later, WeChat was split into Weixin (the Chinese version) and WeChat (the international version) as described in the previous section, with only Weixin being subject to censoring.
Accounts registered using Chinese phone numbers are now managed under the Weixin brand, and their data is stored in mainland China and subject to Weixin's terms of service and privacy policy, which forbid content that "endanger[s] national security, divulge[s] state secrets, subvert[s] state power and undermine[s] national unity". Non-Chinese numbers are registered under WeChat, and WeChat users are subject to different, less strict terms of service and a stricter privacy policy; their data is stored in the Netherlands for users in the European Union, and in Singapore for other users. Censorship in Iran In September 2013, WeChat was blocked in Iran. The Iranian authorities cited WeChat Nearby (Friend Radar) and the spread of pornographic content as the reasons for the censorship. The Committee for Determining Instances of Criminal Content (a working group under the supervision of the attorney general) says on its website FAQ: Because WeChat collects phone data and monitors member activity, and because app developers are outside of the country and not cooperating, this software has been blocked, so you can use domestic applications for cheap voice calls, video calls and messaging. On 4 January 2018, WeChat was unblocked in Iran. Crackdown on LGBTQ accounts in China On July 6, 2021, several Weixin accounts associated with the LGBTQ movement on China's university campuses were blocked and then deleted without warning; official media said they had no knowledge of this. Some of the accounts, which consisted of a mix of registered student clubs and unofficial grassroots groups, had operated for years as safe spaces for China's LGBTQ youth, with tens of thousands of followers. Many of the closed Weixin accounts displayed messages saying that they had "violated" Internet regulations, without giving further details; account names were deleted and replaced with "unnamed", and a notice claimed that all content had been blocked and the accounts suspended after relevant complaints were received. The U.S. State Department expressed concern that the accounts were deleted even though their owners were merely expressing their views and exercising their right to freedom of expression and freedom of speech. Several groups that had their accounts deleted spoke out against the ban, with one stating "[W]e hope to use this opportunity to start again with a continued focus on gender and society, and to embrace courage and love". In August 2023, immediately prior to the Qixi Festival, Weixin launched a mass closure of accounts related to LGBT rights and feminism.
Technology
Social network and blogging
null
3771802
https://en.wikipedia.org/wiki/Acacia%20melanoxylon
Acacia melanoxylon
Acacia melanoxylon, commonly known as the Australian blackwood, is an Acacia species native to south-eastern Australia. The species is also known as blackwood, hickory, mudgerabah, Tasmanian blackwood, or blackwood acacia. The tree belongs to the Plurinerves section of Acacia and is one of the most wide-ranging tree species in eastern Australia and is quite variable mostly in the size and shape of the phyllodes. Description Acacia melanoxylon is able to grow to a height of around and has a bole that is approximately in diameter. It has deeply fissured, dark-grey to black coloured bark that appears quite scaly on older trees. It has angular and ribbed branches. The bark on older trunks is dark greyish-black in colour, deeply fissured and somewhat scaly. Younger branches are glabrous, ribbed and angular to flattened near the greenish coloured tips. The stems of younger plants are occasionally hairy. Like most species of Acacia, it has phyllodes rather than true leaves. The glabrous, glossy, leathery, dark green to greyish-green phyllodes have a length of and a width of with a variable shape. They most often have a narrowly elliptic to lanceolate shape and are straight to slightly curved and often taper near the base and have three to five prominent longitudinal veins. In its native habitat it blooms between July and December producing inflorescences that appear in groups of two to eight on an axillary raceme. The spherical flower heads have a diameter of and contain 30 to 50 densely packed pale yellow to nearly white flowers. Following flowering smooth, firmly papery and glabrous seed pods form. The curved, twisted or coiled pods have a biconvex shape with a length of and a width of and contain longitudinally arranged seeds. Taxonomy The species was first formally described by the botanist Robert Brown in 1813 as a part of the William Aiton work Hortus Kewensis. It was reclassified as Racosperma melanoxylon by Leslie Pedley in 1986 then returned to genus Acacia in 2006. Several other synonyms are known including Acacia arcuata, Mimosa melanoxylon and Acacia melanoxylon var. obtusifolia. Distribution In its native range Acacia melanoxylon is found down the east coast of Australia from Queensland in the north, into New South Wales, through Victoria and west along the south coast of South Australia. It is also found along the east coast of Tasmania. It has become naturalised in Western Australia. In New South Wales it is widespread from coastal areas and into the Great Dividing Range but is not found further inland. It is commonly found at higher altitudes in the Nandewar Range, Liverpool Range and around Orange in the west. It is mostly found as a part of wet sclerophyll forest communities or near cooler rainforest communities. The range of the tree extends from the Atherton Tableland in northern Queensland and follows the coast to around the Mount Lofty Range in South Australia and can grow in a wide range of podsols, especially in sandy loams. Timber Acacia melanoxylon is valued for its decorative timber which may be used in cabinets, musical instruments and in boatbuilding. Appearance Sapwood may range in colour from straw to grey-white with clear demarcation from the heartwood. The heartwood is golden to dark brown with chocolate growth rings. The timber is generally straight grained but may be wavy or interlocked. Quartersawn surfaces may produce an attractive fiddleback figure. The wood is lustrous and possesses a fine to medium texture. 
The name of the wood may refer to dark stains on the hands of woodworkers, caused by the high levels of tannin in the timber. Properties Acacia melanoxylon timber has a density of approximately 660 kg/m3 and is strong in compression, resistant to impact and is moderately stiff. It is moderately blunting to work with tools and bends well. It may be nailed or screwed with ease, but gluing may produce variable results. The wood is easily stained and produces a high-quality finish. Australian blackwood seasons easily with some possible cupping when boards are inadequately restrained. The timber produces little movement once seasoned. The timber may be attacked by furniture beetles, termites and powder-post beetles (sapwood). It is resistant to effective preservative treatments. Invasive species It has been introduced to many countries for forestry plantings and as an ornamental tree. It now is present in Africa, Asia, Europe, Indian Ocean, the Pacific Ocean, South America and the United States. It is a declared noxious weed species in South Africa and is a pest in Portugal's Azores Islands. It was also recently listed by the California Invasive Plant Council (Cal-IPC) as an invasive weed that may cause limited impact (Knapp 2003). Its use as a street tree is being phased out in some locales because of the damage it often causes to pavements and underground plumbing. In some regions of Tasmania, blackwood is now considered a pest. Uses Indigenous Australians use various parts of this tree in a wide variety of ways. The seed is edible, while the tree's leaves are used as soap or a fishing poison. The bark can be used to make string or a traditional analgesic. The hard timber is used to make clap sticks, spear-throwers and shields. The wood has many uses including wood panels, furniture, fine cabinetry, tools, boats, inlaid boxes and wooden kegs. It is approximately the same quality as walnut, and is well-suited for shaping with steam. The bark has a tannin content of about 20%. It may also be used for producing decorative veneers. This tree can also be used as a fire barrier plant, amongst other plants, in rural situations. Plain and figured Australian blackwood is used in musical instrument making (in particular guitars, drums, Hawaiian ukuleles, violin bows and organ pipes), and in recent years has become increasingly valued as a substitute for koa wood. Gallery
Biology and health sciences
Fabales
Plants
3771988
https://en.wikipedia.org/wiki/Lissachatina%20fulica
Lissachatina fulica
Lissachatina fulica is a species of large land snail that belongs in the subfamily Achatininae of the family Achatinidae. It is also known as the giant African land snail. It shares the common name "giant African snail" with other species of snails such as Achatina achatina and Archachatina marginata. This snail species has been considered a significant cause of pest issues around the world. It is a federally prohibited species in the US, as it is illegal to sell or possess. Internationally, it is the most frequently occurring invasive species of snail. Outside of its native range, this snail thrives in many types of habitat with mild climates. It feeds voraciously and is a vector for plant pathogens, causing severe damage to agricultural crops and native plants. It competes with native snail taxa, is a nuisance pest of urban areas, and spreads human disease. Lissachatina fulica castanea (Lamarck, 1822) Lissachatina fulica coloba (Pilsbry, 1904) Lissachatina fulica hamillei (Petit, 1859) Distribution The species is native to East Africa, but it has been widely introduced to other parts of the world through the pet trade, as a food resource, and by accidental introduction. Within Africa, the snail can be found along the eastern coast of South Africa, extending northward into Somalia. However, some of its distribution into northern African may be due to human introduction, starting in northern Mozambique, Tanzania, Kenya, and extending through Somalia into Ethiopia. The snail has been reported in Morocco, Ghana, and the Ivory Coast as early as the 1980s. In 1961, Albert R. Mead, published the seminal work entitled "The Giant African Snail: A Problem in Economic Malacology". This book compiled known information on the snail, as well as a detailed overview on its global distribution. Prior to 1800, the snail was found in Madagascar, spreading westward to Mauritius, reaching Réunion in 1821, then to Seychelles in 1840. In 1847, they were introduced to India and in 1900 in Sri Lanka. In 1911, the snail was present in northern Malaysia, possibly from India or Myanmar. In 1922, the snail was identified in Singapore although it may have been present as early as 1917. In 1925, the snail was shipped to Java, from which it spread across Indonesia. In 1928, the snail was observed in Sarawak. This species has been found in China since 1931 and its initial point of distribution in China was Xiamen. The snail has also been established on Pratas Island, of Taiwan. The species was established in Hawaii, United States, by 1936. The snail was present in Papua New Guinea by 1946, spreading from New Ireland and New Britain to the mainland by 1976–77. By 1967 the snail was present in Tahiti, spreading through New Caledonia and Vanuatu by 1972 into French Polynesia by 1978, including America Samoa. By 1990, the snail was reported in Samoa and the Federated States of Micronesia in 1998. In 1984, L. fulica was found established in the French West Indies, spreading across Guadeloupe and by 1988 arriving in Martinique. In 2008, populations of L. fulica were reported in Trinidad but were greatly reduced by 2010. In 2014, the snail was reported in Havana, Cuba. In Brazil, the first introduction of L. fulica came in 1988 in Paraná. By 2007, it was recorded in 23 of the 26 Brazilian states. In 2006–08, the snail was recorded in Ecuador, in Pichincha, and may have been present at least 10 years prior in 'snail farms'. The presence of the snail in Colombia was reported by 2008–2009. 
Although the time of the initial introduction is unknown, it had been recorded in all regions of the country by 2012. Live specimens were found in Piura, Peru, around 2008 as well. The snail may be present in Venezuela and was reported in Puerto Iguazú, Argentina, in 2010. The species has been observed in Bhutan (Gyelposhing, Mongar), where it has been an invasive species since 2006 and its numbers have increased drastically since 2008. In the contiguous United States, the snail had been reported in the state of Florida in 2011, and later in 2021–2022; however, the snail has not become established. Description The eggs of Lissachatina fulica are pure white and opaque but may be slightly yellowish or even somewhat transparent. The eggs have a thin, calcareous shell, and are about 5 mm long and 4 mm wide, resembling a white chicken egg. A newly hatched snail is called a neonate. When the giant African land snail hatches, its shell is about 5 to 5.5 mm long, consisting of 2.5 whorls. As the snail grows, its shell extends either clockwise (dextral) or counter-clockwise (sinistral), coiling and creating whorls as the snail ages. Dextral growth is most common. Younger snails will have a vertical pattern on their shell (wrinkles, welts, and criss-crossing patterns) of brown and cream color bands. As the snail grows, the new whorls of its shell will be smooth and glossy, consisting of only a brown color. A fully adult snail is around in diameter and or more in length, making it one of the largest of all extant land snails. An adult snail may be expected to have 7-9 whorls, but this is not necessarily a reliable indicator of age; nor is the width of the snail's peristome (the shell's lip at the aperture or opening of the shell), which was traditionally used to measure age, as it varies as well. While the snail most typically has a brown shell with cream sections at its apex, the shell coloration is highly variable. A buttery yellow body (also called pedal or foot) is possible, rather than the typical brownish grey body and brown shell. This variety is nicknamed the 'white jade snail' in China. The snail also comes in the 'golden' variety, sometimes considered an albino type, with a yellow body and yellow shell. Ecology Habitat Within its native range in Africa, the snail is found along the margins of forests. Within its invasive range it can be found in agricultural areas as well as urban areas. The snails prefer areas that shelter them from light in the daytime and prevent desiccation; examples are leaf litter or piles of debris. It will also climb tree trunks or walls when conditions allow. L. fulica occurs in a wide range of temperate climates, now including most regions of the humid tropics. The snail can tolerate a broad range of soil pH and calcium conditions, although calcium is critical for snail shell development. Relative humidity and rainfall are important factors for snail growth; in fact, the snail's shell growth pattern will reflect rainfall patterns, much like the growth rings of a tree. The species can tolerate temperatures from 9 °C (48.2 °F) to 45 °C (113 °F) but thrives at temperatures between 22 and 32 °C (71.6–89.6 °F). When overwintering is necessary, the snail will burrow below the surface and may not lay eggs until temperatures increase to above 15 °C (59 °F). This tactic of avoiding extreme conditions is called aestivation.
The snail can survive in an aestivation state for up to three years by sealing itself into its shell by secretion of a calcareous compound that dries on contact with the air. Feeding The giant African snail is a macrophytophagous herbivore; it eats a wide range of living plant material, commercially important fruits and vegetables, ornamental plants such as flowers, native plants, as well as weeds and detritus plant material. At different life stages and temperatures, the snail has slightly different feeding preferences. For example, young snails are likely to consume soil for its calcium content. Trash, cardboard, and occasionally stucco have been reportedly consumed. Under some conditions the snail will consume dead snails and other deceased animals. It can also be found consuming animal feces as a protein source, which is required for optimal growth. Lifecycle This snail is a protandric hermaphrodite; each individual has both testes and ovaries and is capable of producing both sperm and ova. The testes typically mature first around 5–8 months, followed by the ovaries. Self-fertilization has been observed and therefore snails do not require a partner to reproduce, however it is relatively rare and the resulting egg clutch is small with low viability. Typically, mating involves a simultaneous transfer of gametes to each other (bilateral sperm transfer, as compared to unilateral sperm transfer), however only the older snail with mature ovaries will produce eggs. Younger, smaller snails are more likely to initiate mating with a mate preference for larger, older snails; although larger, older snails may also mate with each other. Snails mate at night and their mating begins with courtship rituals that can last up to half an hour, including petting their heads and front parts against each other. Up to 90% of attempted courtships are rejected and do not end in copulation. Copulation can last anywhere from 1–24 hours but tends to last 6–8 hours. Transferred sperm can be stored within the body up to two years. The snails are oviparous and lay shelled eggs. The number of eggs per clutch and clutches per year varies by environment and age of the parent, but averages to around 200 eggs per clutch and 5–6 clutches per year. The eggs hatch after 8–21 days. The newly emerged neonate will consume its own shell and that of its siblings. The snail reaches adult size in about six months, after which growth slows, but does not cease until death. Life expectancy is 3–5 years in the wild and 5–6 years in captivity, but the snails can live for up to 10 years. As an invasive species In many places, this snail is a pest of agriculture and households, with the ability to transmit both human and plant pathogens. Suggested preventive measures include strict quarantine to prevent introduction and further spread. This snail has been given top national quarantine significance in the United States. In the past, quarantine officials have been able to successfully intercept and eradicate incipient invasions on the mainland USA. This snail was twice established in southeastern Florida and was successfully eradicated both times. They were brought to the U.S. through imports, intended for educational uses and to be pets. Some were also introduced because they were accidentally shipped with other cargo. An eradication effort in Florida began in 2011 when they were first sighted, and the last sighting was in 2017. 
In October 2021 the Florida Department of Agriculture declared the eradication a success after no further sightings in those four years. In June 2022 the snail was again found in Florida. In the wild, this species often harbors the parasitic nematode Angiostrongylus cantonensis, which can cause a very serious meningitis in humans. Human cases of this meningitis usually result from a person having eaten the raw or undercooked snail, but even handling live wild snails of this species can infect a person with the nematode, thus causing a life-threatening infection. In some regions, an effort has been made to promote use of the giant African snail as a food resource to reduce its populations. However, promoting a pest in this way is a controversial measure, because it may encourage the further deliberate spread of the snails. One particularly catastrophic attempt to biologically control this species occurred on South Pacific Islands. Colonies of A. fulica were introduced as a food reserve for the American military during World War II and they escaped. A carnivorous species (Florida rosy wolfsnail, Euglandina rosea) was later introduced by the United States government, in an attempt to control A. fulica, but the rosy wolf snail instead heavily preyed upon the native Partula snails, causing the extinction of most Partula species within a decade. The snail has been eradicated from California, U.S., Queensland, Australia, Fiji, Western Samoa, Vanuatu, and Wake Island, but these were relatively small populations. The Argentinian National Agricultural Health Service has established an ongoing project to detect, study, and prevent the expansion of this pest. In early April 2021, USCBP intercepted 22 being smuggled from Ghana into the US, along with various other prohibited quarantine items. Human health Terrestrial snails in urban environments at high densities pose a risk for human health in the form of zoonotic disease. Human-mediated transport is a major cause of the dispersal of invasive and pest snails, which are then able to survive at high densities in close proximity to people. Young children and adults are vulnerable to zoonotic disease due to an increased likelihood of direct contact with the snail (such as picking the snail up) as well as ingestion of the snail or vegetation contaminated by the snail. Direct contact with snail or ingestion of snail-contaminated food and subsequent infection can result in gastrointestinal and urinary symptoms followed by neurological symptoms. These symptoms appear if the infected snail itself is ingested but is also likely to result in muscular and sensory symptoms. Parasites Several different species and types of parasites have been known to infect Lissachatina fulica. Aelurostrongylus abstrusus, also known as "feline lungworm", is a nematode that infects cats. Angiostrongylus cantonensis, also known as "rat lungworm", is a nematode that causes eosinophilic meningoencephalitis. Infected snails have been found in South American countries including Peru, Ecuador, Venezuela, and Brazil. Human cases of this meningitis usually result from a person having eaten the raw or undercooked snail, but even handling live wild snails of this species can infect a person with the nematode, thus causing a life-threatening infection. Angiostrongylus costaricensis is a nematode that causes abdominal angiostrongyliasis. Fasciola gigantica is a flatworm that has been detected in the faeces and intestines of the snail. 
Hymenolepis is a tapeworm that has been detected in the faeces of the snail. Schistosoma mansoni is a parasitic flatworm that causes intestinal schistosomiasis; sporocysts of S. mansoni have been detected in snail faeces. Strongyloides species, including Strongyloides stercoralis, are roundworms that have been detected in the faeces and mucous secretions of the snail. Trichuris is a roundworm that has been detected in the faeces of the snail. In culture These snails are used by some practitioners of Candomblé for religious purposes in Brazil as an offering to the deity Oxalá. The snails substitute for a closely related species, the West African giant snail (Archachatina marginata), normally offered in Nigeria. The two species are similar enough in appearance to satisfy religious authorities. They are also edible if cooked properly. In Taiwan, this species is used in the dish 炒螺肉 (fried snail meat), which is a delicacy among traditional drinking snacks. L. fulica also constitutes the predominant land snail found in Chinese markets, and larger species have potential as small, efficient livestock. The snails have also become increasingly popular as pets in some countries, where various companies have sold the animal both as a pet and as an educational aid. In light of social media posts in which pet owners share images of themselves in close contact with the snails, a study from the University of Lausanne warned of the risks of infections being transmitted to humans. The heparinoid acharan sulfate is isolated from this species.
Biology and health sciences
Gastropods
Animals
3776458
https://en.wikipedia.org/wiki/Tweed
Tweed
Tweed is a rough, woollen fabric, of a soft, open, flexible texture, resembling cheviot or homespun, but more closely woven. It is usually woven with a plain weave, twill or herringbone structure. Colour effects in the yarn may be obtained by mixing dyed wool before it is spun. Tweeds are an icon of traditional Scottish, Irish, Welsh, and English clothing, being desirable for informal outerwear, due to the material being moisture-resistant and durable. Tweeds are made to withstand harsh climates and are commonly worn for outdoor activities such as shooting and hunting, in England, Wales, Ireland, and Scotland. In Ireland, tweed manufacturing is now most associated with County Donegal but originally covered the whole country. In Scotland, tweed manufacturing is most associated with the Isle of Harris in the Hebrides. Etymology The original name of the cloth was tweel, Scots for twill, the material being woven in a twilled rather than a plain pattern. A traditional story has the name coming about almost by chance. Around 1831, a London merchant, James Locke, received a letter from a Hawick firm, Wm. Watson & Sons, Dangerfield Mills about some "tweels". The merchant misinterpreted the handwriting, understanding it to be a trade-name taken from the River Tweed that flows through the Scottish Borders textile area. The goods were subsequently advertised as Tweed and the name has remained since. Traditions and culture Traditionally used for upper-class country clothing such as shooting jackets, tweed became popular among the Edwardian middle classes who associated it with the leisurely pursuits of the elite. Due to their durability tweed Norfolk jackets and plus-fours were a popular choice for hunters, cyclists, golfers, and early motorists, hence Kenneth Grahame's depiction of Mr. Toad in a Harris Tweed suit. Popular patterns include houndstooth, associated with 1960s fashion, windowpane, gamekeeper's tweed worn by academics, Glen plaid check, originally commissioned by Edward VII, and herringbone. During the 2000s and 2010s, members of long-established British and American land-owning families started to wear high-quality heirloom tweed inherited from their grandparents, some of which pre-dated the Second World War. In modern times, cyclists may wear tweed when they ride vintage bicycles on a Tweed Run. This practice has its roots in the British young fogey and hipster subcultures of the late 2000s and early 2010s, whose adherents appreciate both vintage tweed, and bicycles. Musical instruments Some vintage Danemann upright pianos have a tweed cloth backing to protect the internal mechanism. Occasionally, Scottish bagpipes were covered in tweed as an alternative to tartan wool. The term "tweed" is used to describe coverings on instrument cables and vintage or retro guitar amplifiers, such as the Fender tweed and Fender Tweed Deluxe. Despite the terminology, many of these coverings were not considered tweed but cotton twill due to the cover's design, which caused this misidentification of the design. Types of tweed Harris Tweed: A handwoven tweed, defined in the Harris Tweed Act 1993 as cloth that is "Handwoven by the islanders at their homes in the Outer Hebrides, finished in the Outer Hebrides, and made from pure virgin wool dyed and spun in the Outer Hebrides". Donegal tweed: A handwoven tweed which has been manufactured for several centuries in County Donegal, Ireland, using wool from locally-bred sheep and dye from indigenous plants such as blackberries, gorse (whins), and moss. 
Silk tweed: A fabric made of raw silk with flecks of colour typical of woollen tweed. Saxony tweed: Originated in Saxony, Germany. It is a fabric made from the wool of merino sheep. It is very smooth and soft. Gallery
Technology
Fabrics and fibers
null
22772421
https://en.wikipedia.org/wiki/Multi-factor%20authentication
Multi-factor authentication
Multi-factor authentication (MFA; two-factor authentication, or 2FA, along with similar terms) is an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more pieces of evidence (or factors) to an authentication mechanism. MFA protects personal data—which may include personal identification or financial assets—from being accessed by an unauthorized third party that may have been able to discover, for example, a single password. Usage of MFA has increased in recent years; however, numerous threats consistently make it hard to ensure that MFA is entirely secure. Factors Authentication takes place when someone tries to log into a computer resource (such as a computer network, device, or application). The resource requires the user to supply the identity by which the user is known to the resource, along with evidence of the authenticity of the user's claim to that identity. Simple authentication requires only one such piece of evidence (factor), typically a password. For additional security, the resource may require more than one factor—multi-factor authentication, or two-factor authentication in cases where exactly two pieces of evidence are to be supplied. The use of multiple authentication factors to prove one's identity is based on the premise that an unauthorized actor is unlikely to be able to supply the factors required for access. If, in an authentication attempt, at least one of the components is missing or supplied incorrectly, the user's identity is not established with sufficient certainty and access to the asset (e.g., a building, or data) being protected by multi-factor authentication then remains blocked. The authentication factors of a multi-factor authentication scheme may include: Something the user has: Any physical object in the possession of the user, such as a security token (USB stick), a bank card, a key, etc. Something the user knows: Certain knowledge only known to the user, such as a password, PIN, PUK, etc. Something the user is: Some physical characteristic of the user (biometrics), such as a fingerprint, eye iris, voice, typing speed, pattern in key press intervals, etc. An example of two-factor authentication is the withdrawing of money from an ATM; only the correct combination of a bank card (something the user possesses) and a PIN (something the user knows) allows the transaction to be carried out. Two other examples are to supplement a user-controlled password with a one-time password (OTP) or code generated or received by an authenticator (e.g. a security token or smartphone) that only the user possesses. A third-party authenticator app enables two-factor authentication in a different way, usually by showing a randomly generated and constantly refreshing code which the user can use, rather than sending an SMS or using another method. Knowledge Knowledge factors are a form of authentication. In this form, the user is required to prove knowledge of a secret in order to authenticate. A password is a secret word or string of characters that is used for user authentication. This is the most commonly used mechanism of authentication. Many multi-factor authentication techniques rely on passwords as one factor of authentication. Variations include both longer ones formed from multiple words (a passphrase) and the shorter, purely numeric, PIN commonly used for ATM access.
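As a rough illustration of the principle that access is granted only when every required factor checks out, the following Python sketch combines a knowledge factor (a salted password hash) with a possession factor (a one-time code from a token or authenticator app). The function and variable names are hypothetical and are not drawn from any particular product.

import hashlib
import hmac

def verify_knowledge_factor(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # "Something the user knows": compare a salted hash of the supplied password.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_possession_factor(submitted_code: str, expected_code: str) -> bool:
    # "Something the user has": a one-time code produced by a security token or authenticator app.
    return hmac.compare_digest(submitted_code, expected_code)

def authenticate(password: str, code: str, salt: bytes, stored_hash: bytes, expected_code: str) -> bool:
    # Access is granted only when every required factor succeeds; if any factor
    # is missing or wrong, the attempt fails, as described above.
    return (verify_knowledge_factor(password, salt, stored_hash)
            and verify_possession_factor(code, expected_code))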
Traditionally, passwords are expected to be memorized, but they can also be written down on a hidden paper or in a text file. Possession Possession factors ("something only the user has") have been used for authentication for centuries, in the form of a key to a lock. The basic principle is that the key embodies a secret that is shared between the lock and the key, and the same principle underlies possession-factor authentication in computer systems. A security token is an example of a possession factor. Disconnected tokens have no connections to the client computer. They typically use a built-in screen to display the generated authentication data, which is manually typed in by the user. This type of token mostly uses an OTP that can only be used for that specific session. Connected tokens are devices that are physically connected to the computer to be used. Those devices transmit data automatically. There are a number of different types, including USB tokens, smart cards and wireless tags. Increasingly, FIDO2-capable tokens, supported by the FIDO Alliance and the World Wide Web Consortium (W3C), have become popular, with mainstream browser support beginning in 2015. A software token (a.k.a. soft token) is a type of two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. (Contrast hardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated, absent physical invasion of the device). A soft token may not be a device the user interacts with. Typically an X.509v3 certificate is loaded onto the device and stored securely to serve this purpose. Multi-factor authentication can also be applied in physical security systems. These physical security systems are commonly referred to as access control. Multi-factor authentication is typically deployed in access control systems through the use, firstly, of a physical possession (such as a fob, keycard, or QR code displayed on a device) which acts as the identification credential, and secondly, a validation of one's identity such as facial biometrics or a retinal scan. This form of multi-factor authentication is commonly referred to as facial verification or facial authentication. Inherent These are factors associated with the user, and are usually biometric methods, including fingerprint, face, voice, or iris recognition. Behavioral biometrics such as keystroke dynamics can also be used. Location Increasingly, a fourth factor is coming into play involving the physical location of the user. While hard-wired to the corporate network, a user could be allowed to log in using only a PIN code, whereas if the user is off the network or working remotely, a more secure MFA method, such as also entering a code from a soft token, could be required. Adapting the type and frequency of MFA methods to a user's location helps avoid risks common to remote working. Systems for network admission control work in similar ways, where the level of network access can be contingent on the specific network a device is connected to, such as Wi-Fi versus wired connectivity. This also allows a user to move between offices and dynamically receive the appropriate level of network access in each.
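To make the location factor concrete, the sketch below applies a policy of the kind just described: only a PIN is demanded on a trusted corporate network, while an additional one-time code is required from anywhere else. The network ranges and factor names are assumptions for illustration, not part of any specific product.

import ipaddress

# Hypothetical trusted ranges; a real deployment would load these from policy or configuration.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # assumed corporate LAN
    ipaddress.ip_network("192.168.10.0/24"),  # assumed office Wi-Fi
]

def required_factors(client_ip: str) -> list:
    # On a trusted network a PIN may suffice; off the network, a one-time code is also required.
    address = ipaddress.ip_address(client_ip)
    if any(address in network for network in TRUSTED_NETWORKS):
        return ["pin"]
    return ["pin", "one_time_code"]

For example, required_factors("10.1.2.3") returns ["pin"], while required_factors("203.0.113.7") returns ["pin", "one_time_code"].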
Mobile phone-based authentication Two-factor authentication over text message was developed as early as 1996, when AT&T described a system for authorizing transactions based on an exchange of codes over two-way pagers. Many multi-factor authentication vendors offer mobile phone-based authentication. Some methods include push-based authentication, QR code-based authentication, one-time password authentication (event-based and time-based), and SMS-based verification. SMS-based verification suffers from some security concerns. Phones can be cloned, apps can run on several phones, and cell-phone maintenance personnel can read SMS texts. Not least, cell phones can be compromised in general, meaning the phone is no longer something only the user has. The major drawback of authentication that includes something the user possesses is that the user must carry around the physical token (the USB stick, the bank card, the key or similar), practically at all times. Loss and theft are risks. Many organizations forbid carrying USB and electronic devices in or out of premises owing to malware and data theft risks, and most important machines do not have USB ports for the same reason. Physical tokens usually do not scale, typically requiring a new token for each new account and system. Procuring and subsequently replacing tokens of this kind involves costs. In addition, there are inherent conflicts and unavoidable trade-offs between usability and security. Two-step authentication involving mobile phones and smartphones provides an alternative to dedicated physical devices. To authenticate, people can use their personal access codes to the device (i.e. something that only the individual user knows) plus a one-time-valid, dynamic passcode, typically consisting of 4 to 6 digits. The passcode can be sent to their mobile device by SMS or can be generated by a one-time passcode-generator app. In both cases, the advantage of using a mobile phone is that there is no need for an additional dedicated token, as users tend to carry their mobile devices around at all times. Notwithstanding the popularity of SMS verification, security advocates have publicly criticized SMS verification, and in July 2016, a United States NIST draft guideline proposed deprecating it as a form of authentication. A year later, NIST reinstated SMS verification as a valid authentication channel in the finalized guideline. In 2016 and 2017 respectively, both Google and Apple started offering users two-step authentication with push notifications as an alternative method. Security of mobile-delivered security tokens fully depends on the mobile operator's operational security and can be easily breached by wiretapping or SIM cloning by national security agencies. Advantages: No additional tokens are necessary because it uses mobile devices that are (usually) carried all the time. As they are constantly changed, dynamically generated passcodes are safer to use than fixed (static) log-in information. Depending on the solution, passcodes that have been used are automatically replaced in order to ensure that a valid code is always available; transmission/reception problems therefore do not prevent logins. Disadvantages: Users may still be susceptible to phishing attacks. An attacker can send a text message that links to a spoofed website that looks identical to the actual website. The attacker can then get the authentication code, user name and password.
A mobile phone is not always available—it can be lost, stolen, have a dead battery, or otherwise not work. Despite their growing popularity, some users may not even own a mobile device, and take umbrage at being required to own one as a condition of using some service on their home PC. Mobile phone reception is not always available—large areas, particularly outside of towns, lack coverage. SIM cloning gives hackers access to mobile phone connections. Social-engineering attacks against mobile-operator companies have resulted in the handing over of duplicate SIM cards to criminals. Text messages to mobile phones using SMS are insecure and can be intercepted by IMSI-catchers. Thus third parties can steal and use the token. Account recovery typically bypasses mobile-phone two-factor authentication. Modern smartphones are used both for receiving email and SMS. So if the phone is lost or stolen and is not protected by a password or biometric, all accounts for which the email is the key can be hacked as the phone can receive the second factor. Mobile carriers may charge the user messaging fees. Legislation and regulation The Payment Card Industry (PCI) Data Security Standard, requirement 8.3, requires the use of MFA for all remote network access that originates from outside the network to a Card Data Environment (CDE). Beginning with PCI-DSS version 3.2, the use of MFA is required for all administrative access to the CDE, even if the user is within a trusted network. European Union The second Payment Services Directive requires "strong customer authentication" on most electronic payments in the European Economic Area since September 14, 2019. India In India, the Reserve Bank of India mandated two-factor authentication for all online transactions made using a debit or credit card using either a password or a one-time password sent over SMS. This requirement was removed in 2016 for transactions up to ₹2,000 after opting-in with the issuing bank. Vendors such as Uber have been mandated by the bank to amend their payment processing systems in compliance with this two-factor authentication rollout. United States Details for authentication for federal employees and contractors in the U.S. are defined in Homeland Security Presidential Directive 12 (HSPD-12). IT regulatory standards for access to federal government systems require the use of multi-factor authentication to access sensitive IT resources, for example when logging on to network devices to perform administrative tasks and when accessing any computer using a privileged login. NIST Special Publication 800-63-3 discusses various forms of two-factor authentication and provides guidance on using them in business processes requiring different levels of assurance. In 2005, the United States' Federal Financial Institutions Examination Council issued guidance for financial institutions recommending financial institutions conduct risk-based assessments, evaluate customer awareness programs, and develop security measures to reliably authenticate customers remotely accessing online financial services, officially recommending the use of authentication methods that depend on more than one factor (specifically, what a user knows, has, and is) to determine the user's identity. In response to the publication, numerous authentication vendors began improperly promoting challenge-questions, secret images, and other knowledge-based methods as "multi-factor" authentication. 
Due to the resulting confusion and widespread adoption of such methods, on August 15, 2006, the FFIEC published supplemental guidelines which state that, by definition, a "true" multi-factor authentication system must use distinct instances of the three factors of authentication it had defined, and not just use multiple instances of a single factor. Security According to proponents, multi-factor authentication could drastically reduce the incidence of online identity theft and other online fraud, because the victim's password would no longer be enough to give a thief permanent access to their information. However, many multi-factor authentication approaches remain vulnerable to phishing, man-in-the-browser, and man-in-the-middle attacks. Two-factor authentication in web applications is especially susceptible to phishing attacks, particularly in SMS and e-mails, and, as a response, many experts advise users not to share their verification codes with anyone, and many web application providers will place an advisory in an e-mail or SMS containing a code. Multi-factor authentication may be ineffective against modern threats, like ATM skimming, phishing, and malware. In May 2017, O2 Telefónica, a German mobile service provider, confirmed that cybercriminals had exploited SS7 vulnerabilities to bypass SMS-based two-step authentication and make unauthorized withdrawals from users' bank accounts. The criminals first infected the account holders' computers in an attempt to steal their bank account credentials and phone numbers. Then the attackers purchased access to a fake telecom provider and set up a redirect for the victim's phone number to a handset controlled by them. Finally, the attackers logged into victims' online bank accounts and requested that the money in the accounts be withdrawn to accounts owned by the criminals. SMS passcodes were routed to phone numbers controlled by the attackers and the criminals transferred the money out. MFA fatigue An increasingly common approach to defeating MFA is to bombard the user with many requests to accept a log-in, until the user eventually succumbs to the volume of requests and accepts one. Implementation Many multi-factor authentication products require users to deploy client software to make multi-factor authentication systems work. Some vendors have created separate installation packages for network login, Web access credentials, and VPN connection credentials. For such products, there may be four or five different software packages to push down to the client PC in order to make use of the token or smart card. This translates to four or five packages on which version control has to be performed, and four or five packages to check for conflicts with business applications. If access can be operated using web pages, it is possible to limit the overheads outlined above to a single application. With other multi-factor authentication technology such as hardware token products, no software needs to be installed by end users. There are drawbacks to multi-factor authentication that are keeping many approaches from becoming widespread. Some users have difficulty keeping track of a hardware token or USB plug. Many users do not have the technical skills needed to install a client-side software certificate by themselves. Generally, multi-factor solutions require additional investment for implementation and costs for maintenance. Most hardware token-based systems are proprietary, and some vendors charge an annual fee per user.
Deployment of hardware tokens is logistically challenging. Hardware tokens may get damaged or lost, and issuance of tokens in large industries such as banking or even within large enterprises needs to be managed. In addition to deployment costs, multi-factor authentication often carries significant additional support costs. A 2008 survey of over 120 U.S. credit unions by the Credit Union Journal reported on the support costs associated with two-factor authentication. Research into deployments of multi-factor authentication schemes has shown that one of the elements that tends to affect the adoption of such systems is the line of business of the organization that deploys the multi-factor authentication system. Examples cited include the U.S. government, which employs an elaborate system of physical tokens (which themselves are backed by robust Public Key Infrastructure), as well as private banks, which tend to prefer multi-factor authentication schemes for their customers that involve more accessible, less expensive means of identity verification, such as an app installed onto a customer-owned smartphone. Despite the variation among available systems, once a multi-factor authentication system is deployed within an organization it tends to remain in place, as users acclimate to the presence and use of the system and over time embrace it as a normal element of their daily interaction with the relevant information system. Although multi-factor authentication is often perceived as offering nearly perfect security, Roger Grimes writes that, if not properly implemented and configured, it can in fact be easily defeated. Patents In 2013, Kim Dotcom claimed to have invented two-factor authentication in a 2000 patent, and briefly threatened to sue all the major web services. However, the European Patent Office revoked his patent in light of an earlier 1998 U.S. patent held by AT&T.
Technology
Computer security
null
25706228
https://en.wikipedia.org/wiki/Paleocontinent
Paleocontinent
A paleocontinent or palaeocontinent is a distinct area of continental crust that existed as a major landmass in the geological past. There have been many different landmasses throughout Earth's history. They range in size; some are just collections of small microcontinents while others are large conglomerates of crust. As sea levels rise and fall over time, more or less crust is exposed, making way for larger landmasses. The continents of the past shaped the evolution of organisms on Earth and contributed to the global climate as well. As landmasses break apart, species are separated, and populations that were once continuous evolve separately under their new climates. The constant movement of these landmasses largely determines the distribution of organisms on Earth's surface, which is evident in the similar fossils found on now widely separated continents. Also, as continents move, mountain-building events (orogenies) occur; the newly exposed rock and the greater area of land at high elevations shift the global climate, encouraging the expansion of glacial ice and an overall cooler globe. The movement of the continents thus greatly affects both the dispersal of organisms throughout the world and the trend of climate throughout Earth's history. Examples include Laurentia, Baltica and Avalonia, which collided during the Caledonian orogeny to form the Old Red Sandstone paleocontinent of Laurussia. Another example is the collision between the two continents of Tarimsky and Kirghiz-Kazakh during late Pennsylvanian and early Permian time, which resulted from their oblique convergence as the intervening paleoceanic basin closed. Examples The examples below are condensed to give a brief overview of several paleocontinents. Gondwana Location Gondwana was located in the southern hemisphere, with the landmass that makes up present-day Antarctica closest to the South Pole. The continent reached from just above the equator to the South Pole. Present-day South America and Africa were closest to the equator, with northern Africa intersecting it. Time period 600-180 mya, Precambrian - Jurassic Period. Formation Gondwana was made up of present-day South America, Africa, Arabia, India, Antarctica, Australia, and Madagascar. The continent was fully formed by the late Precambrian, about 600 million years ago, as an amalgamation of all the current southern-hemisphere continents. Gondwana lasted through many different time periods and was a part of other supercontinents, like Pangea. Demise Gondwana broke up in distinct stages. The continent started to split during the Jurassic Period, around 180 million years ago. The first event was the separation of the western half of Gondwana, which includes Africa and South America, from the eastern half, which includes Antarctica, Australia, Madagascar, and India. Next, 40 million years later, South America and Africa began to split, which began to open up the Atlantic Ocean. Also around this time India and Madagascar began to detach from Australia and Antarctica; this separation created the Indian Ocean. Lastly, in the Cretaceous, India and Madagascar began to split, and Australia and Antarctica began to detach from one another. Life The life on Gondwana changed throughout its existence. Gondwana was a smaller piece of Rodinia and stayed together all the way through the breakup of Pangea. 
This allowed Gondwana to host almost all species that have ever lived on Earth. Gondwana was also a part of some great mass extinction events. During the Ordovician, sea levels rose so much that the entire continent was covered; at this time marine life was dominant, and vertebrates started to make an appearance in the fossil record. Terrestrial species started to become more prominent in the Silurian; in the Devonian, modern fish and shark species began to diversify, and terrestrial vegetation began colonizing the continent, as organic soil accumulation can be detected from this time. Amniotic eggs started to evolve as more terrestrial land became available with rising landmasses and falling sea levels. During the Permian extinction, almost all marine species were lost, along with some terrestrial species. This event opened the way for terrestrial groups such as reptiles, dinosaurs, and small mammals. Climate Gondwana experienced a variety of climates, as it existed as a landmass from 600 million years ago in the Precambrian until the breakup of Pangea in the Early Jurassic. In the Cambrian, the climate was warmer and milder because most continental crust lay closer to the equator than to the poles. The continent endured an ice age during the Ordovician period, and deglaciation was still occurring during the Silurian period. The climate then became more humid and tropical throughout the globe, with little seasonality. The climate began to change again during the Mesozoic, a time dominated by a very large and lengthy monsoon season because of Pangea. Once Pangea began to break apart the climate started to cool, but by then Gondwana itself was already breaking apart. Laurentia Location The location of Laurentia has changed throughout time. In the late Proterozoic, Laurentia was surrounded by Siberia, South Africa, Australia-Antarctica, and Amazonia-Baltica. During the time of the supercontinent Gondwana, Laurentia was wedged between Eastern and Western Gondwana, but when Gondwana attached to Laurussia to form Pangea, Laurentia moved closer to northern Africa. Time period 4 bya-present day, Precambrian-Quaternary. Formation Laurentia is the North American craton. It is one of the largest and oldest cratons, dating back to Precambrian times. The craton includes the Canadian and Greenland shields as well as the interior basin of North America, and it can also be taken to include the Cordilleran foreland of the Southwestern United States. The craton formed in deep time, in the early Proterozoic, and has stayed coherent since. It formed through many different orogenies and the suture zones they created, assembling smaller landmasses made of Archean-age crust and belts of Early Proterozoic island arcs. Laurentia has been a part of many supercontinents throughout its history. The formation of Laurentia is similar to the formation of Eurasia. Demise Laurentia is still coherent today and still a continental craton; it now goes by the name North America. The craton stretches from Alberta, Canada to the eastern coasts of both Canada and the United States, and from the southeastern United States to Greenland. The western border of Laurentia lies along the eastern side of the Rocky Mountains. Life Sea level rose in the Cambrian period, and marine invertebrates flourished in the newly flooded areas. 
Life in the Ordovician continued to be dominated by marine animals and vegetation. Vertebrates also started to make up a portion of the animals on Earth, although sponges and algae were still the most dominant groups. Marine animals remained the most dominant, but terrestrial species started to appear at the end of the Ordovician. Life in the Silurian was still dominated by marine species, but terrestrial species were much more prominent than they had been previously. When Laurentia moved into the Devonian period, fish began to diversify and life had begun colonizing land, as this is when organic soil accumulation can first be detected. More modern fish developed as time went on, along with a diversification of sharks. Amniotic eggs also started to evolve as more terrestrial land became available with rising landmasses. The next major event was the Permian extinction, in which almost all species in the oceans died off along with many terrestrial species. This then opened the way for terrestrial animals such as reptiles, dinosaurs, and small mammals. At the end of this era came a mass extinction of dinosaurs and reptiles, which allowed mammals to flourish by taking over many of the niches that became vacant. Climate Laurentia has experienced a variety of climates, having been a landmass for billions of years. The craton experienced an ice age during the late Proterozoic and another during the Ordovician period. During the Cambrian there was no ice age and it was slightly warmer, as most continents avoided the poles, giving the land a milder climate. Deglaciation was still occurring during the Silurian period after the Ordovician ice age. The climate then became more humid and tropical throughout the Earth, with little seasonality. The climate began to change when Laurentia entered the Mesozoic Era, a time dominated by a very large and lengthy monsoon season because of Pangea. At the end of the Cretaceous, seasons started to return and the Earth entered another ice-age-like event. Pangea Location The continent spanned from 85° N to 90° S. Pangea was centered over the equator and encompassed area from the North Pole to the South Pole. The southeastern part of present-day North America and the northern region of present-day Africa intersected the equator. Present-day eastern Asia was furthest north, and Antarctica and Australia were furthest south. Time period 299–272 mya to 200 mya, Early Permian-Early Jurassic. Formation Pangea was formed from the continents of Gondwanaland and Laurussia, which came together during the Carboniferous period. The mountain-building events that happened at this time created the Appalachian Mountains and the Variscan Belt of Central Europe. However, not all landmasses on Earth had attached themselves to Pangea: the Siberian landmass did not collide with Pangea until the late Permian, and the former North and South China plates never joined it at all, forming a much smaller landmass in the ocean. A massive ocean called Panthalassa encompassed the world; because most of the continental crust was sutured together into one giant continent, there was a giant ocean to match. Demise Pangea broke apart after 70 million years. The supercontinent was torn apart through fragmentation, in which parts of the main landmass broke off in stages. Two main events led to the dispersal of Pangea. 
The first was a passive rifting event that occurred in the Triassic period; this rifting caused the Atlantic Ocean to form. The other was an active rifting event in the Lower Jurassic that caused the opening of the Indian Ocean. This breakup took 17 million years to complete. Life Pangea formed roughly 20 million years before the Permian Extinction, during which over 95% of all marine species and 70% of terrestrial species were lost. The Triassic period of Pangea became a time of recovery from the Permian Extinction. Rising sea levels created extensive shallow oceanic shelves for large marine reptiles, and on land this recovery period saw terrestrial animals flourish, land reptiles diversify, and the first appearance of dinosaurs. These dinosaurs would come to characterize the life forms of the following periods, the Jurassic and Cretaceous. Lastly, at the end of the Triassic and the beginning of the Jurassic, small shrew-like mammals, descended from reptiles, made their first appearance. Climate The main characteristic of Pangea's climate is that its position on Earth was favorable for starting a cycle of megamonsoonal circulation. The monsoons reached their maximum strength in the Triassic period of the Mesozoic. During the late Carboniferous, peat formed in what is currently Europe and the eastern areas of North America. The wetter, swamp-like conditions needed to form peat contrasted with the dry conditions on the Colorado Plateau. Nearing the end of the Carboniferous, the region of Pangea centered on the equator became drier. In the Permian, this dryness was contrasted with seasonal rainfall, and this type of climate became more common and widespread on the continent. However, during the Triassic, the Colorado Plateau started to regain some moisture and there was a shift in wind direction. Around the same time, parts of present-day Australia that lay at higher latitudes were much drier and more seasonal in character. At the start of the Jurassic, the megamonsoon started to fall apart as drying set in across Gondwana and the southern portion of Laurasia. Rodinia Location Rodinia was centered on the Equator and reached from 60° N to 60° S. Time period 1.2-1 bya to 800-850 mya, Proterozoic Eon - end of Precambrian. Formation It was the first supercontinent to form on Earth: all the continental crust on Earth came together and formed one giant landmass, surrounded by an even larger ocean known as Mirovia. About four smaller continents collided and came together to form Rodinia, in an event called the Grenville Orogeny. Mountain building occurred along the areas where continents collided, because continental crust is not very dense, so neither continent would sink or subduct; instead, fold and thrust belts formed, similar to the Himalayas today. Demise Rodinia lasted for about 250 million years and then began to come apart between 850 and 800 mya. The continent began to break apart at a single point but then fractured and ripped open in three different directions. Two of the three rifts that were created were successful and the third failed. The breakup of Rodinia led to the formation of Gondwana (or Gondwanaland) and Laurentia. Rodinia's breakup created many shallow coastal shelves that were not there before. 
The shelves were nutrient-rich, and this is thought to have led to the diversification of vegetative and non-vegetative life on Earth. The shelves in particular were the area where animal life is said to have started. The name Rodinia alludes to this: in Russian it means 'to give birth', in this case to animal life on Earth. Climate The climate at the end of Rodinia's existence was cold, and this is thought to have been the start of the first snowball Earth period. Rodinia already had some glaciation, but as it tore apart, less dense rock began to rise, placing more land area at higher elevations, which encouraged ice to persist. The time of Rodinia was otherwise a time of relative inactivity in Earth's atmosphere. The atmosphere also held little oxygen, and the ozone layer was much less extensive, which contributed to a land surface too harsh for land plants to flourish.
Physical sciences
Paleogeography
Earth science
5116788
https://en.wikipedia.org/wiki/Hayashi%20track
Hayashi track
The Hayashi track is a luminosity–temperature relationship obeyed by infant stars of less than in the pre-main-sequence phase (PMS phase) of stellar evolution. It is named after Japanese astrophysicist Chushiro Hayashi. On the Hertzsprung–Russell diagram, which plots luminosity against temperature, the track is a nearly vertical curve. After a protostar ends its phase of rapid contraction and becomes a T Tauri star, it is extremely luminous. The star continues to contract, but much more slowly. While slowly contracting, the star follows the Hayashi track downwards, becoming several times less luminous but staying at roughly the same surface temperature, until either a radiative zone develops, at which point the star starts following the Henyey track, or nuclear fusion begins, marking its entry onto the main sequence. The shape and position of the Hayashi track on the Hertzsprung–Russell diagram depend on the star's mass and chemical composition. For solar-mass stars, the track lies at a temperature of roughly 4000 K. Stars on the track are nearly fully convective and have their opacity dominated by H− ions. Stars less than are fully convective even on the main sequence, but their opacity begins to be dominated by Kramers' opacity law after nuclear fusion begins, thus moving them off the Hayashi track. Stars between 0.5 and develop a radiative zone prior to reaching the main sequence. Stars between 3 and are fully radiative at the beginning of the pre-main-sequence. Even heavier stars are born onto the main sequence, with no PMS evolution. At the end of a low- or intermediate-mass star's life, the star follows an analogue of the Hayashi track, but in reverse—it increases in luminosity, expands, and stays at roughly the same temperature, eventually becoming a red giant. History In 1961, Professor Chushiro Hayashi published two papers that led to the concept of the pre-main-sequence and form the basis of the modern understanding of early stellar evolution. Hayashi realized that the existing model, in which stars are assumed to be in radiative equilibrium with no substantial convection zone, could not explain the shape of the red-giant branch. He therefore revised the model to include the effects of thick convection zones on a star's interior. A few years prior, Osterbrock had proposed deep convection zones with efficient convection, analyzing them using the opacity of H− ions (the dominant opacity source in cool atmospheres) at temperatures below 5000 K. However, the earliest numerical models of Sun-like stars did not follow up on this work and continued to assume radiative equilibrium. In his 1961 papers, Hayashi showed that the convective envelope of a star is determined by a dimensionless quantity (not an energy). Modelling stars as polytropes with index 3/2 (in other words, assuming they follow a pressure–density relationship of the form P ∝ ρ^(5/3)), he found that this quantity has a maximum value for a quasistatic star; for a star that is not contracting rapidly, this maximum defines a curve on the HR diagram, to the right of which the star cannot exist. He then computed the evolutionary tracks and isochrones (luminosity–temperature distributions of stars at a given age) for a variety of stellar masses and noted that NGC2264, a very young star cluster, fits the isochrones well. In particular, he calculated much lower ages for solar-type stars in NGC2264 and predicted that these stars were rapidly contracting T Tauri stars. 
In 1962, Hayashi published a 183-page review of stellar evolution, discussing the evolution of stars born in the forbidden region. These stars rapidly contract due to gravity before settling to a quasistatic, fully convective state on the Hayashi tracks. In 1965, numerical models by Iben and Ezer & Cameron realistically simulated pre-main-sequence evolution, including the Henyey track that stars follow after leaving the Hayashi track. These standard PMS tracks can still be found in textbooks on stellar evolution. Forbidden zone The forbidden zone is the region on the HR diagram to the right of the Hayashi track where no star can be in hydrostatic equilibrium, even those that are partially or fully radiative. Newborn protostars start out in this zone, but are not in hydrostatic equilibrium and will rapidly move towards the Hayashi track. Because stars emit light via black-body radiation, the power per unit surface area they emit is given by the Stefan–Boltzmann law: j = σT_eff^4. The star's luminosity is therefore given by L = 4πR^2σT_eff^4. For a given luminosity L, a lower temperature implies a larger radius, and vice versa. Thus, the Hayashi track separates the HR diagram into two regions: the allowed region to the left, with high temperatures and smaller radii for each luminosity, and the forbidden region to the right, with lower temperatures and correspondingly larger radii. The Hayashi limit can refer to either the lower bound in temperature or the upper bound on radius defined by the Hayashi track. The region to the right is forbidden because it can be shown that a star in the region must have a temperature gradient ∇ ≡ d ln T / d ln P greater than the adiabatic gradient, which is 0.4 for a monatomic ideal gas undergoing adiabatic expansion or contraction. A temperature gradient greater than 0.4 is therefore called superadiabatic. Consider a star with a superadiabatic gradient. Imagine a parcel of gas that starts at radial position r, but moves upwards to r + Δr in a sufficiently short time that it exchanges negligible heat with its surroundings—in other words, the process is adiabatic. The pressure of the surroundings, as well as that of the parcel, decreases by some amount ΔP. The parcel's temperature changes by some amount ΔT_parcel. The temperature of the surroundings also decreases, but by some amount ΔT_surr that is greater than ΔT_parcel. The parcel therefore ends up being hotter than its surroundings. Since the ideal gas law can be written P ∝ ρT, a higher temperature implies a lower density at the same pressure. The parcel is therefore also less dense than its surroundings. This will cause it to rise even more, and the parcel will become even less dense than its new surroundings. Clearly, this situation is not stable. In fact, a superadiabatic gradient causes convection. Convection tends to lower the temperature gradient because the rising parcel of gas will eventually be dispersed, dumping its excess thermal and kinetic energy into its surroundings and heating up said surroundings. In stars, the convection process is known to be highly efficient, with a typical ∇ that only exceeds the adiabatic gradient by 1 part in 10 million. If a star is placed in the forbidden zone, with a temperature gradient much greater than 0.4, it will experience rapid convection that brings the gradient down. Since this convection will drastically change the star's pressure and temperature distribution, the star is not in hydrostatic equilibrium, and will contract until it is. A star far to the left of the Hayashi track has a temperature gradient smaller than adiabatic. 
This means that if a parcel of gas rises a tiny bit, it will be more dense than its surroundings and sink back to where it came from. Convection therefore does not occur, and almost all energy output is carried radiatively. Star formation Stars form when small regions of a giant molecular cloud collapse under their own gravity, becoming protostars. The collapse releases gravitational energy, which heats up the protostar. This process occurs on the free-fall timescale, which is roughly 100,000 years for solar-mass protostars, and ends when the protostar reaches approximately 4000 K. This is known as the Hayashi boundary, and at this point the protostar is on the Hayashi track. Such stars are known as T Tauri stars and continue to contract, but much more slowly. As they contract, they decrease in luminosity because less surface area becomes available for emitting light. The Hayashi track gives the resulting change in temperature, which will be minimal compared to the change in luminosity because the Hayashi track is nearly vertical. In other words, on the HR diagram, a T Tauri star starts out on the Hayashi track with a high luminosity and moves downward along the track as time passes. The Hayashi track describes a fully convective star. This is a good approximation for very young pre-main-sequence stars because they are still cool and highly opaque, so that radiative transport is insufficient to carry away the generated energy and convection must occur. Stars less massive than remain fully convective, and therefore remain on the Hayashi track, throughout their pre-main-sequence stage, joining the main sequence at the bottom of the Hayashi track. Stars heavier than have higher interior temperatures, which decreases their central opacity and allows radiation to carry away large amounts of energy. This allows a radiative zone to develop around the star's core. The star is then no longer on the Hayashi track, and experiences a period of rapidly increasing temperature at nearly constant luminosity. This is called the Henyey track, and it ends when temperatures are high enough to ignite hydrogen fusion in the core. The star is then on the main sequence. Lower-mass stars follow the Hayashi track until the track intersects with the main sequence, at which point hydrogen fusion begins and the star follows the main sequence. Even lower-mass 'stars' never achieve the conditions necessary to fuse hydrogen and become brown dwarfs. Derivation The exact shape and position of the Hayashi track can only be computed numerically using computer models. Nevertheless, we can make an extremely crude analytical argument that captures most of the track's properties. The following derivation loosely follows that of Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution. In our simple model, a star is assumed to consist of a fully convective interior inside of a fully radiative atmosphere. The convective interior is assumed to be an ideal monatomic gas with a perfectly adiabatic temperature gradient, d ln T / d ln P = 0.4; this quantity is sometimes labelled ∇_ad. The following adiabatic equation therefore holds true for the entire interior: P ∝ ρ^γ, where γ is the adiabatic gamma, which is 5/3 for an ideal monatomic gas. The ideal gas law says P = ρk_BT/(μH), where μ is the molecular weight per particle, and H is (to a very good approximation) the mass of a hydrogen atom. The adiabatic relation represents a polytrope of index 1.5, since a polytrope is defined by P = Kρ^(1 + 1/n), where n is the polytropic index. 
Applying the equation to the center of the star gives We can solve for C: But for any polytrope, and . are all constants independent of pressure and density, and the average density is defined as ρ̄ = 3M/(4πR^3). Plugging these two equations into the equation for , we have where all multiplicative constants have been ignored. Recall that our original definition of was Therefore, for any star of mass M and radius R, we have We need another relationship between , , , and in order to eliminate . This relationship will come from the atmosphere model. The atmosphere is assumed to be thin, with average opacity . Opacity is defined to be optical depth divided by density. Thus, by definition, the optical depth of the stellar surface, also called the photosphere, is where is the stellar radius, also known as the position of the photosphere. The pressure at the surface is The optical depth at the photosphere turns out to be . By definition, the temperature of the photosphere is , where effective temperature is given by . Therefore, the pressure is We can approximate the opacity to be where , . Plugging this into the pressure equation, we get Finally, we need to eliminate and introduce , the luminosity. This can be done with the equation Equations and can now be combined by setting and in equation 1, then eliminating . can be eliminated using equation . After some algebra and setting , we get where In cool stellar atmospheres (), like those of newborn stars, the dominant source of opacity is the H− ion, for which and , we get and . Since A is much smaller than 1, the Hayashi track is extremely steep: if the luminosity changes by a factor of 2, the temperature only changes by 4%. The fact that B is positive indicates that the Hayashi track shifts left on the HR diagram, towards higher temperatures, as mass increases. Although this model is extremely crude, these qualitative observations are fully supported by numerical simulations. At high temperatures, the atmosphere's opacity begins to be dominated by Kramers' opacity law instead of the H− ion, with and In that case, in our crude model, far higher than 0.05, and the star is no longer on the Hayashi track. In Stellar Interiors, Hansen, Kawaler, and Trimble go through a similar derivation without neglecting multiplicative constants and arrive at where is the molecular weight per particle. The authors note that the coefficient of 2600 K is too low—it should be around 4000 K—but this equation nevertheless shows that temperature is nearly independent of luminosity. Numerical results The diagram at the top of this article shows numerically computed stellar evolution tracks for various masses. The vertical portion of each track is the Hayashi track. The endpoints of each track lie on the main sequence. The horizontal segments for higher-mass stars show the Henyey track. It is approximately true that The diagram to the right shows how Hayashi tracks change with changes in chemical composition. is the star's metallicity, the mass fraction not accounted for by hydrogen or helium. For any given hydrogen mass fraction, increasing leads to increasing molecular weight. The dependence of temperature on molecular weight is extremely steep—it is approximately Decreasing by a factor of 10 shifts the track right, changing by about 0.05. Chemical composition affects the Hayashi track in a few ways. The track depends strongly on the atmosphere's opacity, and this opacity is dominated by the H− ion. 
The abundance of the H− ion is proportional to the density of free electrons, which, in turn, is higher if there are more metals because metals are easier to ionize than hydrogen or helium. Observational status Observational evidence of the Hayashi track comes from color–magnitude plots—the observational equivalent of HR diagrams—of young star clusters. For Hayashi, NGC 2264 provided the first evidence of a population of contracting stars. In 2012, data from NGC 2264 was re-analyzed to account for dust reddening and extinction. The resulting color–magnitude plot is shown at right. In the upper diagram, the isochrones are curves along which stars of a certain age are expected to lie, assuming that all stars evolve along the Hayashi track. An isochrone is created by taking stars of every conceivable mass, evolving them forwards to the same age, and plotting all of them on the color–magnitude diagram. Most of the stars in NGC 2264 are already on the main sequence (black line), but a substantial population lies between the isochrones for 3.2 million and 5 million years, indicating that the cluster is 3.2–5 million years old and a large population of T Tauri stars is still on their respective Hayashi tracks. Similar results have been obtained for NGC 6530, IC 5146, and NGC 6611. The lower diagram shows Hayashi tracks for various masses, along with T Tauri observations collected from a variety of sources. Note the bold curve to the right, representing a stellar birthline. Even though some Hayashi tracks theoretically extend above the birthline, few stars are above it. In effect, stars are "born" onto the birthline before evolving downwards along their respective Hayashi tracks. The birthline exists because stars formed from overdense cores of giant molecular clouds in an inside-out manner. That is, a small central region first collapses in on itself while the outer shell is still nearly static. The outer envelope then accretes onto the central protostar. Before the accretion is over, the protostar is hidden from view, and therefore not plotted on the color-magnitude diagram. When the envelope finishes accreting, the star is revealed and appears on the birthline.
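To make the radius bookkeeping behind these diagrams concrete, the following short Python sketch (an illustration, not taken from any stellar-evolution code) inverts the Stefan–Boltzmann relation L = 4πR^2σT_eff^4 used in the Forbidden zone section. It shows that for a star sliding down a nearly vertical Hayashi track at an assumed fixed effective temperature of about 4000 K, each drop in luminosity corresponds directly to a shrinking radius.

```python
import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
R_SUN = 6.957e8       # solar radius, m

def radius_from_luminosity(lum, t_eff):
    """Invert L = 4*pi*R^2*sigma*T_eff^4 for the stellar radius R (in metres)."""
    return math.sqrt(lum / (4.0 * math.pi * SIGMA * t_eff ** 4))

# A T Tauri star descending its Hayashi track at roughly constant T_eff ~ 4000 K:
T_EFF = 4000.0
for lum_in_suns in (10.0, 5.0, 2.0, 1.0):
    r = radius_from_luminosity(lum_in_suns * L_SUN, T_EFF)
    print(f"L = {lum_in_suns:4.1f} L_sun  ->  R = {r / R_SUN:.2f} R_sun")
```

Because the temperature is held fixed, the radius scales as the square root of the luminosity; a factor-of-ten drop in luminosity shrinks the star by roughly a factor of three, which is the geometric content of a near-vertical track.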
Physical sciences
Stellar astronomy
Astronomy
545520
https://en.wikipedia.org/wiki/Chard
Chard
Chard or Swiss chard (Beta vulgaris subsp. vulgaris, Cicla Group and Flavescens Group) is a green leafy vegetable. In the cultivars of the Flavescens Group, the leaf stalks are large and often prepared separately from the leaf blade; the Cicla Group is the leafy spinach beet. The leaf blade can be green or reddish; the leaf stalks are usually white, yellow or red. Chard, like other green leafy vegetables, has highly nutritious leaves. Chard has been used in cooking for centuries, but because it is the same species as beetroot, the common names that cooks and cultures have used for chard may be confusing; it has many common names, such as silver beet, perpetual spinach, beet spinach, seakale beet, or leaf beet. Classification Chard was first described in 1753 by Carl Linnaeus as Beta vulgaris var. cicla. Its taxonomic rank has changed many times: it has been treated as a subspecies, a convariety, and a variety of Beta vulgaris. (Among the numerous synonyms for it are Beta vulgaris subsp. cicla (L.) W.D.J. Koch (Cicla Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. cicla L., B. vulgaris var. cycla (L.) Ulrich, B. vulgaris subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Spinach Beet Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch (Flavescens Group), B. vulgaris subsp. cicla (L.) W.D.J. Koch var. flavescens (Lam.) DC., B. vulgaris L. subsp. vulgaris (Leaf Beet Group), B. vulgaris subsp. vulgaris (Swiss Chard Group)). The accepted name for all beet cultivars, like chard, sugar beet and beetroot, is Beta vulgaris subsp. vulgaris. They are cultivated descendants of the sea beet, Beta vulgaris subsp. maritima. Chard belongs to the chenopods, which are now mostly included in the family Amaranthaceae (sensu lato). The two rankless cultivar groups for chard are the Cicla Group for the leafy spinach beet and the Flavescens Group for the stalky Swiss chard. Etymology The word "chard" descends from the 14th-century French carde, from Latin carduus, meaning the artichoke thistle (the cardoon, a group which also includes the artichoke). The origin of the adjective "Swiss" is unclear. Some attribute the name to the plant having been first described by a Swiss botanist, either Gaspard Bauhin or Karl Koch (although the latter was German, not Swiss). Be that as it may, chard is used in Swiss cuisine, e.g. in the traditional dish capuns from the canton of Grisons. Growth and harvesting Chard is a biennial. Clusters of chard seeds are usually sown, in the Northern Hemisphere, between June and October, the exact time depending on the desired harvesting period. Chard can be harvested while the leaves are young and tender, or after maturity when they are larger and have slightly tougher stems. Harvesting is a continual process, as most varieties of chard produce three or more crops. Cultivars Cultivars of chard include green forms, such as 'Lucullus' and 'Fordhook Giant,' as well as red-ribbed forms, such as 'Ruby Chard' and 'Rhubarb Chard.' The red-ribbed forms are attractive in the garden, but as a general rule, the older green forms tend to outproduce the colorful hybrids. 'Rainbow Chard' is a mix of colored varieties often mistaken for a variety unto itself. Chard has shiny, green, ribbed leaves, with petioles that range in color from white to yellow to red, depending on the cultivar. Chard may be harvested in the garden all summer by cutting individual leaves as needed. 
In the Northern Hemisphere, chard is typically ready to harvest as early as April and lasts until there is a hard frost, typically below . It is one of the hardier leafy greens, with a harvest season that typically lasts longer than that of kale, spinach, or baby greens. Culinary use Fresh chard can be used raw in salads, stirfries, soups or omelets. The raw leaves can be used like a tortilla wrap. Chard leaves and stalks are typically boiled or sautéed; the bitterness fades with cooking. It is one of the most common ingredients of Croatian cuisine in the Dalmatia region, being known as "queen of the Dalmatian garden" and used in various ways (boiled, in stews, in Soparnik etc.). Nutritional content In a serving, raw Swiss chard provides of food energy and has rich content (> 19% of the Daily Value, DV) of vitamins A, K, and C, with 122%, 1038%, and 50%, respectively, of the DV. Also having significant content in raw chard are dietary fiber, vitamin K and the dietary minerals magnesium, manganese, iron, and potassium. Raw chard has a low content of carbohydrates, protein, and fat. Cooked chard is 93% water, 4% carbohydrates, 2% protein, and contains negligible fat. In a reference 100 g serving, cooked chard supplies 20 calories, with vitamin and mineral contents reduced compared to raw chard, but still present in significant proportions of the DV, especially for vitamin A, vitamin K, vitamin C, and magnesium (see table).
Biology and health sciences
Caryophyllales
null
545523
https://en.wikipedia.org/wiki/Collard%20%28plant%29
Collard (plant)
Collard is a group of loose-leafed cultivars of Brassica oleracea, the same species as many common vegetables including cabbage and broccoli. Part of the Acephala (kale) cultivar group, it is also classified as the variety B. oleracea var. viridis. The plants are grown as a food crop for their large, dark-green, edible leaves, which are cooked and eaten as vegetables. Collard greens have been cultivated as food since classical antiquity. Nomenclature The term colewort is a medieval term for non-heading brassica crops. The term collard has been used to include many non-heading Brassica oleracea crops. While American collards are best placed in the Viridis crop group, the acephala (Greek for 'without a head') cultivar group is also used, referring to the lack of the close-knit core of leaves (a "head") that cabbage has; this makes collards more tolerant of high humidity levels and less susceptible to fungal diseases. In Africa, it is known as sukuma (East Africa), muriwo or umBhida (Southern Africa). In Kashmir, it is known as haakh. Description The plant is a biennial where winter frost occurs; some varieties may be perennial in warmer regions. It has an upright stalk, often growing over two feet tall, and up to six feet for the Portuguese cultivars. Popular cultivars of collard greens include 'Georgia Southern', 'Vates', 'Morris Heading', 'Blue Max', 'Top Bunch', 'Butter Collard' (couve manteiga), couve tronchuda, and Groninger Blauw. Taxonomy Collard is generally described as part of the Acephala (kale) cultivar group, but is also classified as the variety B. oleracea var. viridis. Cultivation The plant is commercially cultivated for its thick, slightly bitter, edible leaves. They are available year-round, but are tastier and more nutritious in the cold months, after the first frost. For best texture, the leaves are picked before they reach their maximum size, at which stage they are thicker and are cooked differently from the new leaves. Age does not affect flavor. Flavor and texture also depend on the cultivar; the couve manteiga and couve tronchuda are especially appreciated in Brazil and Portugal. The large number of varieties grown in the United States decreased as people moved to towns after World War II, leaving only five varieties commonly in cultivation. However, seeds of many varieties remained in use by individual farmers, growers and seed savers as well as within U.S. government seed collections. In the Appalachian region, cabbage collards, characterized by yellow-green leaves and a partially heading structure, are more popular than the dark-green non-heading types in the coastal South. Since the early 2000s there have been projects both to preserve seeds of uncommon varieties and to return more varieties to cultivation. Pests The sting nematode, Belonolaimus gracilis, and the awl nematode, Dolichodorus spp., are both ectoparasites that can injure collard. Root symptoms include stubby or coarse roots that are dark at the tips. Shoot symptoms include stunted growth, premature wilting, and chlorosis (Nguyen and Smart, 1975). Another species of sting nematode, Belonolaimus longicaudatus, is a pest of collards in Georgia and North Carolina (Robbins and Barker, 1973). B. longicaudatus is devastating to seedlings and transplants. As few as three nematodes per of soil when transplanting can cause significant yield losses on susceptible plants. They are most common in sandy soils (Noling, 2012). 
The stubby root nematodes Trichodorus and Paratrichodorus attach to and feed near the tips of collard taproots. The damage prevents proper root elongation, leading to tight mats of roots that can appear swollen, resulting in a "stubby root" (Noling, 2012). Several species of the root knot nematode Meloidogyne spp. infest collards. These include M. javanica, M. incognita and M. arenaria. Second-stage juveniles attack the plant and settle in the roots. However, infestation seems to occur at lower populations compared to other cruciferous plants. Root symptoms include deformation (galls) and injury that prevent proper water and nutrient uptake. This can eventually lead to stunting, wilting and chlorosis of the shoots. The false root knot nematode Nacobbus aberrans has a wide host range of up to 84 species including many weeds. On Brassicas it has been reported in several states, including Nebraska, Wyoming, Utah, Colorado, Montana, South Dakota, and Kansas (Manzanilla-López et al., 2002). As a pest of collards, the degree of damage is dependent upon the nematode population in the soil. Some collard cultivars exhibit resistance to bacterial leaf blight incited by Pseudomonas cannabina pv. alisalensis (Pca). Uses Nutrition Raw collard greens are 90% water, 6% carbohydrates, 3% protein, and contain negligible fat (table). Like kale, collard greens contain substantial amounts of vitamin K (339% of the Daily Value, DV) in a serving. Collard greens are rich sources (20% or more of DV) of vitamin A, vitamin C, and manganese, and moderate sources of calcium and vitamin B6. A reference serving of cooked collard greens provides of food energy. Culinary East Africa Collard greens are known as sukuma in Swahili and are one of the most common vegetables in East Africa. Sukuma is mainly lightly sautéed in oil until tender, flavoured with onions and seasoned with salt, and served either as the main accompaniment or as a side dish with meat or fish. In Congo, Tanzania and Kenya (East Africa), thinly sliced collard greens are the main accompaniment of a popular dish known as sima or ugali (made with maize flour). Southern and Eastern Europe Collards have been cultivated in Europe for thousands of years, with references from the Greeks and Romans dating back to the 1st century CE. In Montenegro, Dalmatia and Herzegovina, collard greens, locally known as raštika or raštan, were traditionally one of the staple vegetables. It is particularly popular in the winter, stewed with smoked mutton (kaštradina) or cured pork, root vegetables and potatoes. Known in Turkey as kara lahana ("dark cabbage"), it is a staple in the Black Sea area. It is also an essential ingredient in many Spanish soups and stews, like the pote asturiano, from the Asturian province. United States Collard greens are a staple vegetable in Southern U.S. cuisine. They are often prepared with other similar green leaf vegetables, such as spinach, kale, turnip greens, and mustard greens, in the dish called "mixed greens". Typically used in combination with collard greens are smoked and salted meats (ham hocks, smoked turkey drumsticks, smoked turkey necks, pork neckbones, fatback or other fatty meat), diced onions, vinegar, salt, and black pepper, white pepper, or crushed red pepper, and some cooks add a small amount of sugar. Traditionally, collards are eaten on New Year's Day, along with black-eyed peas or field peas and cornbread, to ensure wealth in the coming year. Cornbread is used to soak up the "pot liquor", a nutrient-rich collard broth. 
Collard greens may also be thinly sliced and fermented to make a collard sauerkraut that is often cooked with flat dumplings. Landrace collard in-situ genetic diversity and ethnobotany are subjects of research for citizen-science groups. During the time of slavery in the U.S., collards were one of the most common plants grown in kitchen gardens and were used to supplement the rations provided by plantation owners. Greens were widely used because the plants could last through the winter weather and could withstand the heat of a southern summer even better than spinach or lettuce. Broadly, collard greens symbolize Southern culture and African-American culture and identity. For example, the jazz composer and pianist Thelonious Monk sported a collard leaf in his lapel to represent his African-American heritage. At President Barack Obama's first state dinner, collard greens were included on the menu. Novelist and poet Alice Walker used collards to reference the intersection of African-American heritage and black women. There have been many collard festivals that celebrate African-American identity, including those in Port Wentworth, Georgia (since 1997), East Palo Alto, California (since 1998), Columbus, Ohio (since 2010), and Atlanta, Georgia (since 2011). In 2010, the Latibah Collard Greens Museum opened in Charlotte, North Carolina. Many observers in the late nineteenth century wrote about the pervasiveness of collards in Southern cooking, particularly among black Americans. In 1869, Hyacinth, a traveler during the Civil War, for example, observed that collards could be found anywhere in the south. In 1972, another observer, Stearns, echoed that sentiment, claiming that collards were present in every black Southerner's garden. In 1883, a writer commented that there was no word or dish more popular among poorer whites and blacks than collard greens. The collard sandwich—consisting of fried cornbread, collard greens, and fatback—is a popular dish among the Lumbee people in Robeson County, North Carolina. Brazil and Portugal In Portuguese and Brazilian cuisine, collard greens (or couve) are a common accompaniment to fish and meat dishes. They make up a standard side dish for feijoada, a popular pork and beans-style stew. These Brazilian and Portuguese cultivars are likely members of a distinct non-heading cultivar group of Brassica oleracea, specifically the Tronchuda Group. Thinly sliced collard greens are also a main ingredient of a popular Portuguese soup, the caldo verde ("green broth"). For this broth, the leaves are sliced into strips, wide (sometimes by a grocer or market vendor using a special hand-cranked slicer) and added to the other ingredients 15 minutes before it is served. Kashmir Valley In Kashmir, collard greens (locally called haakh) are included in most meals. Leaves are harvested by pinching in early spring when the dormant buds sprout and give out tender leaves known as kaanyil haakh. When the extending stem bears alternate leaves in quick succession during the growing season, older leaves are harvested periodically. In late autumn, the apical portion of the stem is removed along with the whorled leaves. There are several dishes made with haakh. A common dish eaten with rice is haak rus, a soup of whole collard leaves cooked simply with water, oil, salt, green chilies and spices. Zimbabwe In Zimbabwe, collard greens are known as in Ndebele and in Shona. 
Due to the climate, the plant thrives under almost all conditions, with most people growing it in their gardens. It is commonly eaten with sadza (ugali in East Africa, pap in South Africa, fufu in West Africa and polenta in Italy) as part of the staple food. is normally wilted in boiling water before being fried and combined with sautéed onions or tomato. Some (more traditionally, the Shona people) add beef, pork and other meat to the mix for a type of stew. Most people eat on a regular basis in Zimbabwe, as it is economical and can be grown with little effort in home gardens. In literature Collard greens are often mentioned in literature from the American South. William Faulkner mentions collard greens as part of a Southern meal in his novel Intruder In the Dust. Walker Percy mentions collard greens in his 1983 short story "The Last Donohue Show." Collards appear in Clyde Edgerton's novel Lunch at the Piccadilly. In the novel Gone With the Wind, hungry protagonist Scarlett O'Hara wistfully remembers a pre-Civil War meal that included "collards swimming richly in pot liquor iridescent with grease." In Flannery O'Connor's short story A Stroke of Good Fortune, the main character is an unhappy working-class woman who reluctantly cooks collard greens for her brother, which she finds rustic.
Biology and health sciences
Brassicales
null
545620
https://en.wikipedia.org/wiki/Celeriac
Celeriac
Celeriac (Apium graveolens Rapaceum Group, synonyms Apium graveolens Celeriac Group and Apium graveolens var. rapaceum), also called celery root, knob celery, and turnip-rooted celery (although it is not a close relative of the turnip), is a group of cultivars of Apium graveolens cultivated for their edible, bulb-like hypocotyl and shoots. Celeriac is widely cultivated in the Mediterranean Basin and in Northern Europe. It is also cultivated, though less commonly, in North Africa, Siberia, Southwest Asia, and North America. History Wild celery (Apium graveolens), from which both celeriac and celery derive, originated in Europe and the Mediterranean Basin. It was mentioned in the Iliad and Odyssey as selinon. Celeriac was grown as a medicinal crop in some early civilizations. Culinary use Typically, celeriac is harvested when its hypocotyl is in diameter. This is white on the inside, and can be kept for months in winter. It often serves as a key ingredient in soup. It can also be shredded and used in salads. The leaves are used as seasoning; the small, fibrous stalks find only marginal use. The shelf life of celeriac is approximately six to eight months if stored between , and not allowed to dry out. However, the vegetable will tend to rot through the centre if the finer stems surrounding the base are left attached. The centre of celeriac becomes hollow as it ages, though even freshly harvested celeriacs can have a small medial hollow. Freshness is also obvious from the taste; the older it is, the weaker the celery flavour.
Biology and health sciences
Other vegetables
Plants
545824
https://en.wikipedia.org/wiki/Goosefish
Goosefish
Goosefishes, sometimes called anglers or monkfishes, are a family, the Lophiidae, of marine ray-finned fishes belonging to the order Lophiiformes, the anglerfishes. The family includes 30 recognized species. These fishes are found in all the world's oceans except for the Antarctic Ocean. Taxonomy The goosefish family, Lophiidae, was first proposed as a genus in 1810 by the French polymath and naturalist Constantine Samuel Rafinesque. The Lophiidae is the only family in the monotypic suborder Lophioidei, which is one of the five suborders of the Lophiiformes. The Lophioidei is considered to be the most basal of the suborders in the order. Etymology The goosefish family, Lophiidae, takes its name from its type genus, Lophius. Lophius means "mane" and is presumably a reference to the first 3 spines of the first dorsal fin, which are tentacle-like, with 3 smaller spines behind them. Genera The goosefish family, Lophiidae, contains the following extant genera: Fossil taxa The following extinct taxa are also among those included in the family Lophiidae: Genus Caruso Carnevale and Pietsch, 2012 Caruso brachysomus Agassiz, 1835 Genus Emmachaere D. S. Jordan and Gilbert, 1919 Emmachaere rhomalea D. S. Jordan, 1921 Genus Eosladenia Bannikov, 2004 Eosladenia caucasica Bannikov, 2004 Genus Sharfia Pietsch & Carnevale, 2011 Sharfia mirabilis Pietsch & Carnevale, 2011 Characteristics Goosefishes in the family Lophiidae have flattened heads and bodies covered in thin skin, and are further characterised by the possession of pelvic fins. The first, spiny dorsal fin has its origin close to the rear of the head and is supported by between one and three spines. The frontmost spine, the illicium, has a flap of flesh, the esca, at its tip and is used as a lure to attract prey to within reach of the cavernous mouth. There are 4 pharyngobranchials, the 4th being toothed, and they have a large pseudobranch. The body has no scales and the frontal bones of the skull are fused. They have a very wide, flattened head, although Sladenia has a more rounded head, with well-developed teeth. The lower jaw has a fringe of small flaps along its edge, and these extend along the head onto the flanks. The second dorsal fin is supported by between 8 and 12 soft rays, while the anal fin contains between 6 and 10 soft rays. Most taxa have 18 or 19 vertebrae, but in Lophius this count is between 26 and 31. The opening to the gills is located to the rear of the pectoral fin base. The largest species in the family is the angler (Lophius piscatorius), which has a maximum published standard length of while the smallest is Lophiodes fimbriatus with a maximum published standard length of . Distribution The goosefishes, family Lophiidae, are found in the temperate, tropical, and subtropical Atlantic, Indian, and Pacific Oceans. Habitat and biology The goosefishes are typically found on soft substrates on the continental margin, most frequently at depths greater than , and there are species which have been found at depths greater than . A few species, such as the American angler (Lophius americanus), are found in shallower waters, sometimes moving into bays and estuaries with high-salinity water in the winter. At least in the genus Lophius, the females release their spawn enclosed within a floating gelatinous mass, which has been compared to the spawn of toads in appearance. They have pelagic eggs and larvae, with demersal juveniles and benthic adults. 
Utilisation Goosefishes, particularly several of the large species in the genus Lophius, commonly known as monkfishes in northern Europe, are important commercially fished species. The liver of monkfish, known as ankimo, is considered a delicacy in Japan.
Biology and health sciences
Acanthomorpha
Animals
546415
https://en.wikipedia.org/wiki/Extrapolation
Extrapolation
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced. By doing so, one makes an assumption about the unknown (for example, a driver may extrapolate road conditions beyond what is currently visible, and these extrapolations may be correct or incorrect). The extrapolation method can be applied in the interior reconstruction problem. Method A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods. Crucial questions are, for example, whether the data can be assumed to be continuous, smooth, possibly periodic, and so on. Linear Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x* to be extrapolated are (x_{k-1}, y_{k-1}) and (x_k, y_k), linear extrapolation gives the function y(x*) = y_{k-1} + ((x* - x_{k-1}) / (x_k - x_{k-1})) (y_k - y_{k-1}), which is identical to linear interpolation if x_{k-1} < x* < x_k. It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques over the data points chosen to be included. This is similar to linear prediction. Polynomial A polynomial curve can be created through the entire known data set or just near its end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. Conic A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer. French curve French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors. This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and of variant CJD in the UK for a number of years. 
Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies. Geometric Extrapolation with error prediction A geometric extrapolation can be created from three points of a sequence and the "moment" or "index"; this type of extrapolation is reported to achieve exact predictions for a large percentage of the sequences in a known series database (OEIS). Quality Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated. In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces. Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis while sin(x) remains in the interval [−1,1]; that is, the error increases without bound. Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation. This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally, due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors. In the complex plane In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable $z \mapsto 1/z$. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform, however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data. Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region. Again, analytic continuation can be thwarted by function features that were not evident from the initial data. Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants. Fast Extrapolated data are often convolved with a kernel function.
After data are extrapolated, the size of the data set is increased, typically by a factor N of approximately 2–3. If these data need to be convolved with a known kernel function, the numerical calculations grow by roughly a factor of N log(N) even with the fast Fourier transform (FFT). There exists an algorithm that analytically calculates the contribution from the extrapolated part of the data; its calculation time is negligible compared with the original convolution calculation. Hence, with this algorithm, the cost of a convolution using the extrapolated data is barely increased. This is referred to as fast extrapolation, and it has been applied to CT image reconstruction. Extrapolation arguments Extrapolation arguments are informal and unquantified arguments which assert that something is probably true beyond the range of values for which it is known to be true. For example, we believe in the reality of what we see through magnifying glasses because it agrees with what we see with the naked eye but extends beyond it; we believe in what we see through light microscopes because it agrees with what we see through magnifying glasses but extends beyond it; and similarly for electron microscopes. Such arguments are widely used in biology in extrapolating from animal studies to humans and from pilot studies to a broader population. Like slippery slope arguments, extrapolation arguments may be strong or weak depending on such factors as how far the extrapolation goes beyond the known range.
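To make the divergence example discussed under Quality concrete, the short sketch below fits polynomials to sin(x) sampled only near x = 0 and then evaluates them far outside that window; numpy is assumed to be available, and the degrees and sample window are arbitrary illustrative choices.

    import numpy as np

    # Sample sin(x) only on a narrow window around x = 0.
    x = np.linspace(-0.5, 0.5, 21)
    y = np.sin(x)

    for degree in (1, 3, 5):
        coeffs = np.polyfit(x, y, degree)    # least-squares polynomial fit
        estimate = np.polyval(coeffs, 10.0)  # extrapolate far outside the sampled window
        print(degree, estimate)              # grows in magnitude, while sin(10) stays in [-1, 1]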
Mathematics
Basics
null
546778
https://en.wikipedia.org/wiki/Moorish%20idol
Moorish idol
The Moorish idol (Zanclus cornutus) is a species of marine ray-finned fish belonging to the family Zanclidae. It is the only member of the monospecific genus Zanclus and the only extant species within the Zanclidae. This species is found on reefs in the Indo-Pacific region. Taxonomy The Moorish idol was first formally described as Chaetodon cornutus in 1758 by Carl Linnaeus in the 10th edition of the Systema Naturae with "Indian Seas" given as its type locality. In 1831 Georges Cuvier classified it in the new monospecific genus Zanclus. In 1876 Pieter Bleeker proposed the monotypic family Zanclidae. The Zanclidae is classified within the suborder Acanthuroidei of the order Acanthuriformes. Some authors classify the Moorish idols in the surgeonfish family Acanthuridae but the absence of spines on the caudal peduncle is a clear difference between this species and the surgeonfishes. Eozanclus brevirostris is an extinct species in the Zanclidae family that was first described by Giovanni Serafino Volta in 1796. The species later received separate taxonomic status within the Zanclidae family through the description of Blot and Voruz in 1970 and 1975. Recently, a new extinct species has joined the Zanclidae family. The species, Angiolinia mirabilis, was described by Giorgio Carnevale and James C. Tyler in 2024 based on three specimens found in Bolca, Italy. Carnevale & Tyler found that Zanclus cornutus and Angiolinia mirabilis form a derived clade distinguishable from Eozanclus brevirostris by one supernumerary spine on the first dorsal-fin pterygiophore, one uroneural in the caudal skeleton, and distally filamentous dorsal-fin spines (except the first two fins). Etymology Moorish idol's unusual name was apparently given to it because, in some areas of south-east Asia, fishermen have respect for these fishes, releasing them when caught and honouring them with a bow after their release. In this case, Moor being erroneously used as it usually refers to Amazigh people from Morocco where this fish does not occur in the wild. The genus name Zanclus is derived from the Ancient Greek word zanklon, meaning "sickle", and is an allusion to the long curved dorsal fin. The specific name, cornutus, means "horned", and refers to the small bony protuberances over the eyes. Description The Moorish idol's body is highly compressed and disc-like in shape with a tube-like snout and small bony protuberances above the eyes in adults. The mouth is small and has many long, bristle like teeth. There are no spines or serrations on the preoperculum or caudal peduncle. The dorsal fin is supported by 6 or 7 spines, which are elongated into a long filament which resembles a whip, and between 39 and 45 soft rays. The anal fin contains 3 spines and between 31 and 37 soft rays. The maximum published total length is , although is more typical. They have a white background color, with two wide black vertical bands on the body with a yellow patch on the posterior end of the body and a yellow saddle on the snout. The caudal fin is black with a white margin. Like their closely related family, the Acanthuridae, the Moorish idol has larvae that are specialized for a long pelagic life stage. In acanthuridae, the pelagic, pre-juvenile stage larvae can reach lengths of 60 mm before settling in their habitat. The Moorish idol’s various larval stages have been described and illustrated. The preflexion larval stage refers to the stage from hatchling to the start of upward flexion of the notochord. 
The preflexion larval stage of the Moorish idol has no fin spines, soft rays, or internal support structures for the fins. However, in a 3.2 mm specimen there is the start of the dorsal and anal fins. The larger preflexion specimens have mostly cartilaginous supraoccipital crests with 23 to 26 curved dorsal spines. Also, pigmentation increases with size in the preflexion larvae. The postflexion larval stage refers to the stage that includes the formation of the caudal fin and fin rays. This is the stage right before juvenile and settlement into their habitat. In the postflexion stage, the Moorish idol larvae have fully developed fins, their body form is compressed and deep-bodied. The larvae have a small terminal mouth and kite shaped body. They have seven dorsal spines at the start of their dorsal fin that are covered in small spines.Their third dorsal spine is very long (about 1.2x their standard length). They also have one pelvic fin spine and three anal fin spines covered in small spines. Distribution and habitat The Moorish idol has a wide range in the Indian and Pacific Oceans. They are found from the eastern coast of Africa between Somalia and South Africa east to Hawaii and Easter Island. They are also found in the eastern Pacific from the southern Gulf of California to Peru, including many islands such as the Galapagos and Cocos Island. The Moorish idol lives between depths of 1 to 180 m in turbid lagoons, reef flats, and clear rocky and coral reefs. As with many reef fishes, habitat has been found to be an important and influential factor in the abundance of Moorish idols. Conservation Status Since their last assessment in 2015, the Moorish idol is listed as a species of least concern by the IUCN. They were found to be widely distributed and locally abundant with no major threats to the species. However, their habitat type, particularly coral reefs, are known to be in decline due to climate change. On a bright note, the species has been found to do well in restored coral reefs and artificial reef structures. Biology Although, Moorish idols are omnivores, they mostly feed on sponges, as well as, algae, coral polyps, tunicates and other benthic invertebrates. A gut content analysis study shows that sponges make up about 70% of the total weight consumed by Moorish idols. They are normally found in small groups of 2 or 3 individuals but they can also be solitary or gather in large schools. They have a long pelagic larval stage and this is why they are so widespread and geographically uniform. These fish are pelagic spawners. The males and females release sperm and eggs into the water, and the eggs drift away on the current following fertilization. In the aquarium Moorish idols are notoriously difficult to maintain in captivity. They require large tanks, often exceeding , are voracious eaters, and can become destructive. Some aquarists prefer to keep substitute species that look very similar to the Moorish idol. These substitutes are all butterflyfishes of the genus Heniochus and include the pennant coralfish, H. acuminatus; threeband pennantfish, H. chrysostomus and the false Moorish idol, H. diphreutes. In captivity, Moorish idols typically are very picky eaters. They will either eat no food and perish, or eat everything all at once. In popular culture In the 2003 Disney/Pixar animated movie Finding Nemo, a Moorish idol fish named Gill, voiced by Willem Dafoe, is one of Nemo's tank mates and the leader of the Tank Gang. 
Gill was depicted as having a very strong desire for freedom outside of the aquarium and was constantly scheming to achieve this, possibly alluding to the difficulty of keeping real-life Moorish idols in captivity. Gill and the other members of the Tank Gang appeared in the 2016 sequel, Finding Dory, in a post-credits scene. Moorish idols have long been among the most recognizable of coral reef fauna. Their image has graced all types of products, such as shower curtains, blankets, towels and wallpaper made with an ocean or underwater theme.
Biology and health sciences
Acanthomorpha
Animals
547111
https://en.wikipedia.org/wiki/Breechloader
Breechloader
A breechloader is a firearm in which the user loads the ammunition from the breech end of the barrel (i.e., from the rearward, open end of the gun's barrel), as opposed to a muzzleloader, in which the user loads the ammunition from the (muzzle) end of the barrel. The vast majority of modern firearms are generally breech-loaders, while firearms made before the mid-19th century were mostly smoothbore muzzle-loaders. Only a few muzzleloading weapons, such as mortars, rifle grenades, some rocket launchers, such as the Panzerfaust 3 and RPG-7, and the GP series grenade launchers, have remained in common usage in modern military conflicts. However, referring to a weapon explicitly as breech-loading is mostly limited to weapons where the operator loads ammunition by hand (and not by operating a mechanism such as a bolt-action), such as artillery pieces or break-action small arms. Breech-loading provides the advantage of reduced reloading time because it is far quicker to load the projectile and propellant into the chamber of a gun or cannon than to reach all the way over to the front end to load ammunition and then push them back down a long tube – especially when the projectile fits tightly and the tube has spiral ridges from rifling. In field artillery, the advantages were similar – crews no longer had to get in front of the gun and pack ammunition in the barrel with a ramrod, and the shot could now tightly fit the bore, greatly increasing its power, range, and accuracy. It also made it easier to load a previously fired weapon with a fouled barrel. Gun turrets and emplacements for breechloaders can be smaller since crews don't need to retract the gun for loading into the muzzle end. Unloading a breechloader is much easier as well, as the ammunition can be unloaded from the breech end and is often doable by hand; unloading muzzle loaders requires drilling into the projectile to drag it out through the whole length of the barrel, and in some cases the guns are simply fired to facilitate unloading process. The advent of breech-loading gave a significant increase to effective firepower by its own right, and also enabled further revolutions in firearm designs such as repeating and self-loading firearms. History Although breech-loading firearms were developed as far back as the early 14th century in Burgundy and various other parts of Europe, breech-loading became more successful with improvements in precision engineering and machining in the 19th century. The main challenge for developers of breech-loading firearms was sealing the breech. This was eventually solved for smaller firearms by the development of the self-contained metallic cartridge in the mid-19th century. For firearms too large to use cartridges, the problem was solved by the development of the interrupted screw. Swivel guns Breech-loading swivel guns were invented in the 14th century. They were a particular type of swivel gun, and consisted in a small breech-loading cannon equipped with a swivel for easy rotation, loaded by inserting a mug-shaped chamber already filled with powder and projectiles. The breech-loading swivel gun had a high rate of fire, and was especially effective in anti-personnel roles. Firearms Breech-loading firearms are known from the 16th century. Henry VIII possessed one, which he apparently used as a hunting gun to shoot birds. 
Meanwhile, in China, an early form of breech-loading musket, known as the Che Dian Chong, was known to have been created in the second half of the 16th century for the Ming dynasty's arsenals. Like all early breech-loading fireams, gas leakage was a limitation and danger present in the weapon's mechanism. More breech-loading firearms were made in the early 18th century. One such gun known to have belonged to Philip V of Spain, and was manufactured circa 1715, probably in Madrid. It came with a ready-to load reusable cartridge. Patrick Ferguson, a British Army officer, developed in 1772 the Ferguson rifle, a breech-loading flintlock firearm. Roughly two hundred of the rifles were manufactured and used in the Battle of Brandywine, during the American Revolutionary War, but shortly after they were retired and replaced with the standard Brown Bess musket. In turn the American army, after getting some experience with muzzle-loaded rifles in the late 18th century, adopted the second standard breech-loading firearm in the world, M1819 Hall rifle, and in larger numbers than the Ferguson rifle. About the same time and later on into the mid-19th century, there were attempts in Europe at an effective breech-loader. There were concentrated attempts at improved cartridges and methods of ignition. In Paris in 1808, in association with French gunsmith François Prélat, Jean Samuel Pauly created the first fully self-contained cartridges: the cartridges incorporated a copper base with integrated mercury fulminate primer powder (the major innovation of Pauly), a round bullet and either brass or paper casing. The cartridge was loaded through the breech and fired with a needle. The needle-activated central-fire breech-loading gun would become a major feature of firearms thereafter. The corresponding firearm was also developed by Pauly. Pauly made an improved version, which was protected by a patent on 29 September 1812. The Pauly cartridge was further improved by the French gunsmith Casimir Lefaucheux in 1828, by adding a pinfire primer, but Lefaucheux did not register his patent until 1835: a pinfire cartridge containing powder in a cardboard shell. In 1845, another Frenchman Louis-Nicolas Flobert invented, for indoor shooting, the first rimfire metallic cartridge, constituted by a bullet fit in a percussion cap. Usually derived in the 6 mm and 9 mm calibres, it is since then called the Flobert cartridge but it does not contain any powder; the only propellant substance contained in the cartridge is the percussion cap itself. In English-speaking countries the Flobert cartridge corresponds to the .22 BB and .22 CB ammunitions. In 1846, yet another Frenchman, Benjamin Houllier, patented the first fully metallic cartridge containing powder in a metallic shell. Houllier commercialised his weapons in association with the gunsmiths Blanchard or Charles Robert. But the subsequent Houllier and Lefaucheux cartridges, even if they were the first full-metal shells, were still pinfire cartridges, like those used in the LeMat (1856) and Lefaucheux (1858) revolvers, although the LeMat also evolved in a revolver using rimfire cartridges. The first centrefire cartridge was introduced in 1855 by Pottet, with both Berdan and Boxer priming. In 1842, the Norwegian Armed Forces adopted the breech-loading caplock, the Kammerlader, one of the first instances in which a modern army widely adopted a breech-loading rifle as its main infantry firearm. 
The Dreyse Zündnadelgewehr (Dreyse needle gun) was a single-shot breech-loading rifle using a rotating bolt to seal the breech. It was so called because of its .5-inch needle-like firing pin, which passed through a paper cartridge case to impact a percussion cap at the bullet base. It began development in the 1830s under Johann Nicolaus von Dreyse and eventually an improved version of it was adopted by Prussia in the late 1840s. The paper cartridge and the gun had numerous deficiencies; specifically, serious problems with gas leaking. However, the rifle was used to great success in the Prussian army in the Austro-Prussian war of 1866. This, and the Franco-Prussian war of 1870–71, eventually caused much interest in Europe for breech-loaders and the Prussian military system in general. In 1860, the New Zealand government petitioned the Colonial Office for more soldiers to defend Auckland. The bid was unsuccessful and the government began instead making inquiries to Britain to obtain modern weapons. In 1861 they placed orders for the Calisher and Terry carbine, which used a breech-loading system using a bullet consisting of a standard Minié lead bullet in .54 calibre backed by a charge and tallowed wad, wrapped in nitrated paper to keep it waterproof. The carbine had been issued in small numbers to English cavalry (Hussars) from 1857. About 3–4,000 carbines were brought into New Zealand a few years later. The carbine was used extensively by the Forest Rangers, an irregular force led by Gustavus von Tempsky that specialized in bush warfare and reconnaissance. Von Tempsky liked the short carbine, which could be loaded while lying down. The waterproofed cartridge was easier to keep dry in the New Zealand bush. Museums in New Zealand hold a small number of these carbines in good condition. During the American Civil War, at least nineteen types of breech-loaders were fielded. The Sharps used a successful dropping block design. The Greene used rotating bolt-action, and was fed from the breech. The Spencer, which used lever-actuated bolt-action, was fed from a seven-round detachable tube magazine. The Henry and Volcanic used rimfire metallic cartridges fed from a tube magazine under the barrel. These held a significant advantage over muzzle-loaders. The improvements in breech-loaders had spelled the end of muzzle-loaders. To make use of the enormous number of war surplus muzzle-loaders, the Allin conversion Springfield was adopted in 1866. General Burnside invented a breech-loading rifle before the war, the Burnside carbine. The French adopted the new Chassepot rifle in 1866, which was much improved over the Dreyse needle gun as it had dramatically fewer gas leaks due to its de Bange sealing system. The British initially took the existing Enfield and fitted it with a Snider breech action (solid block, hinged parallel to the barrel) firing the Boxer cartridge. Following a competitive examination of 104 guns in 1866, the British decided to adopt the Peabody-derived Martini-Henry with trap-door loading in 1871. Single-shot breech-loaders would be used throughout the latter half of the 19th Century, but were slowly replaced by various designs for repeating rifles, first used in the American Civil War. Manual breech-loaders gave way to manual magazine feed and then to self-loading rifles. Artillery The first modern breech-loading rifled gun is a breech-loader invented by Martin von Wahrendorff with a cylindrical breech plug secured by a horizontal wedge in 1837. 
In the 1850s and 1860s, Whitworth and Armstrong invented improved breech-loading artillery. The M1867 naval guns produced in Imperial Russia at the Obukhov State Plant used Krupp technology. Breech mechanism A breech action is the loading sequence of a breech-loading naval gun or small arm. The earliest breech actions were either three-shot break-open actions or tip-down-barrel actions in which the plug was removed and the gun reloaded. Later breech-loaders included the Ferguson rifle, which used a screw-in/screw-out action to reload, and the Hall rifle, which tipped up at 30 degrees for loading. The better breech-loaders, however, used percussion caps; these included the Sharps rifle, which used a falling-block (or sliding-block) action to reload. The Dreyse needle gun then introduced a moving bolt to seal and expose the breech. Later still, the Mauser M71/84 rifle combined self-contained metallic cartridges with a rotating bolt to open and close the breech.
Technology
Mechanisms_2
null
547497
https://en.wikipedia.org/wiki/Gateshead%20Millennium%20Bridge
Gateshead Millennium Bridge
The Gateshead Millennium Bridge is a pedestrian and cyclist tilt bridge spanning the River Tyne between Gateshead arts quarter on the south bank and Newcastle upon Tyne's Quayside area on the north bank. It was the first tilting bridge ever to be constructed. Opened for public use in 2001, the award-winning structure was conceived and designed by architectural practice WilkinsonEyre and structural engineering firm Gifford. The bridge is sometimes called the 'Blinking Eye Bridge' or the 'Winking Eye Bridge' due to its shape and its tilting method. The Millennium Bridge stands as the twentieth tallest structure in the city, and is shorter in stature than the neighbouring Tyne Bridge. History Historical context Gateshead Millennium Bridge is part of a long history of bridges built across the River Tyne, the earliest of which was constructed in the Middle Ages. As quay-based industries grew during the Industrial Revolution and Victorian era due to its accessible port, the area became more prosperous. However, industry declined along the River Tyne following World War II and the quay deteriorated into the 1980s. This prompted regeneration activities in both Newcastle and Gateshead, beginning with the construction of Newcastle Law Courts on the riverbank. In 1995, Gateshead Council devised plans to develop a new contemporary arts centre, the Baltic Centre for Contemporary Art, and the need for a footbridge to link the two cities became more apparent. Conception A competition was held by Gateshead Council in 1996 to design a new bridge to link Gateshead to Newcastle, the first opening bridge to be built on the River Tyne in over 100 years. The bridge would form part of the regeneration on both sides of the River Tyne, providing a crossing between new commercial buildings and housing built in Newcastle and cultural and leisure developments in Gateshead. It would also facilitate a circular promenade around the Quayside. Although river-based traffic had decreased by the 21st century, the cities of Gateshead and Newcastle still intended to retain the image of the River Tyne as a working river. The advert for the competition was published in the New Civil Engineer magazine with the brief "We are looking for design teams who can create a stunning, but practical, river level crossing which fits this historic setting, opens for shipping and is good enough to win Millennium Commission funding." There were over 150 entries and Gateshead residents voted for their favourite out of a shortlist of six architectural teams. WilkinsonEyre and Gifford and Partners claimed the prize in February 1997 with Gateshead Councillor Mick Henry remarking that the design was "something very special." By July 1997, a final design was under preparation for submission to the Millennium Commission in order to secure funding. The bridge, which is the world's first tilting bridge, ultimately cost £22million, with funding from the Millennium Commission, the European Regional Development Fund, English Partnerships, East Gateshead Single Regeneration Budget, and Gateshead Council. By this point, the name of the bridge was still undecided. The original proposed name of 'Baltic Millennium Bridge' (in reference to the adjacent Baltic Centre for Contemporary Art on the Gateshead side) was objected to by Newcastle City Council. In response, Gateshead Council decided upon the final name of 'Gateshead Millennium Bridge' in 1998, which caused an ongoing feud between the two councils. 
Opening Gateshead Council originally announced that the bridge would be open in September 2000, but it was not completed until September the following year. The first tilt took place on 28 June 2001 to 36,000 onlookers. It was opened to the public on 17 September 2001 to a crowd of thousands. The barrier lifted at 2pm to allow the first public crossing, and the first people to cross received a commemorative medal gift from the Council. The bridge was dedicated by Queen Elizabeth II on 7 May 2002, during her Golden Jubilee tour. A commemorative plaque unveiled by the Queen reads: "Gateshead Millennium Bridge. Opened by Her Majesty The Queen on 7th May 2002." Before a formal dinner at the Baltic Centre for Contemporary Art, the Queen said "Today I see the tangible signs of the determination of all those within this region to create a new future. There have been so many personal acts of kindness, especially over the last two months, now I have the chance to express my gratitude to the people of the North East." Structure Design Gateshead Millennium Bridge was constructed to fulfil the following main design constraints: the bridge must be above river-level during high spring tides when closed; nothing must be built on the Gateshead Quayside; the deck must have no more than a 1:20 slope to allow disabled access. The bridge consists of two steel arches – a deck which acts as the pedestrian and cycle path, and a supporting arch. The bridge was designed to be as light as possible to allow for easy opening and closing, so the two arches are lighter towards the centre span than at the hinges. The pedestrian and cycle deck is a parabolic shape with a vertical camber. It is divided into two separate paths on different levels for the different modes of transport, separated by a stainless steel "hedge" with seating areas and steps interspersed throughout. The supporting arch is also a parabola, designed in such a way as to match the shape of the Tyne Bridge upstream. The two arches are joined together by 18 suspension cables which provide stability for people crossing the bridge. Six hydraulic rams (three on each side) tilt the entire 850,000kg bridge as a single structure, meaning that when the supporting arch lowers, the pedestrian deck rises to create of clearance for river traffic to pass underneath. The bridge takes around four minutes to rotate through the full 40° from closed to open, moving as fast as per second. The design is so energy-efficient that, , it cost just £3.96 per opening. The appearance of the bridge during this manoeuvre has led to it being nicknamed the "Blinking Eye Bridge", and has solidified its reputation as being not only a functional piece of infrastructure but a spectacle in and of itself. The rotation of the bridge is also used as a self-cleaning mechanism, as rubbish collected on the deck rolls towards traps built at each end. A lighting system designed by Jonathan Spiers and Associates is used at night to attractively illuminate the bridge without causing light pollution, as the cables are too thin to be visible or reflect light at night. The lights shine white during the week and a variety of colours over the weekend. Green and red LEDs are used during the day to alert cyclists and pedestrians to the bridge's opening and closing. Construction and installation Gateshead Council selected Gateshead-based Harbour & General as the main contractor for the construction of the bridge. 
Harbour and General then selected over 12 sub-contractors to cover elements of construction including control systems, metalwork, lighting, and piling and river work. Consulting engineering group Ramboll provided further engineering, construction, and contract management services. The bridge's structure was modelled in LUSAS using 3D elements. LUSAS modelling allowed a model of the bridge to be built and allowed analysis of buckling forces, wind, and temperature. Another software – Pertmaster Professional – was used for risk and project management and cost analysis. Watson Steel was appointed as the specialist contractor to prefabricate the bridge, and they subcontracted the design of the hydraulic system to Kvaerner Markham. The pre-fabricated sections of the bridge were shot-blasted and painted in Hadrian's Yard, from the bridge's final position. The entire structure was assembled by first welding together the nine arch sections and deck sections, and then attaching the cables to the arch and deck. Protective paintwork (Interzone 505 and Interthane 990 from International Protective Coatings) was applied to the arch before it was erected. The bridge was lifted into place in one piece by the Asian Hercules II, one of the world's largest floating cranes, on 20 November 2000. Whilst being transported by the crane, the bridge was rotated 90° in order to navigate narrow bends along the river. It was successfully slotted into threaded bolts in the piers with only of tolerance. Handrails, seating, and the hydraulic systems were installed after the bridge was in place. The transportation of the bridge took only one day and was a spectacle, attracting crowds of onlookers. The Port of Tyne Authority required the design of the bridge to incorporate a vessel collision protection system. As a result, two rows of parallel fixed piles, splaying out diagonally on each side of the bridge, were installed. However, it became clear to members of the construction project team and WilkinsonEyre that they were unsightly and undermined "the finesse of the bridge". Between February and June 2000, the unsightly nature of the piles also caught the attention of the public, with multiple news articles and letters expressing discontent. Complaints pointed out that the Millennium Bridge in London did not have similar piles, and that a Newcastle University boat race had to be moved specifically to avoid potential collision with the piles. Over time, Gateshead Council and the Harbourmaster noted that the piles were not required and they were removed in 2012. This decision was ultimately less expensive than maintaining them. Regional and cultural significance Gateshead Millennium Bridge has retained its status as a significant local landmark and tourist attraction, not only built to develop the local area but also to establish local pride. It is one of several cultural landmarks on Gateshead Quays, including Baltic Centre for Contemporary Art and Sage Gateshead. It opens periodically for sightseers and for major events such as The Boat Race of the North and the Cutty Sark Tall Ships' Race. The bridge also lights up to mark celebrations or dedications. For example, it was lit blue on 4 July 2020 as part of the 'Light it Blue' campaign celebrating the 72nd anniversary of the NHS and its contributions during the COVID-19 pandemic. It was also lit green in April 2020 in recognition of social care workers. 
The bridge has been featured in film and on TV including the BBC TV drama 55 Degrees North and the British 2005 film Goal!. On 17 July 2005, Spencer Tunick used the bridge in an art installation: 1,700 people gathered together nude and were photographed around the Millennium and Tyne Bridges and the Baltic Centre for Contemporary Art. The bridge was pictured on a first-class stamp in 2000, and a pound coin depicting the bridge was produced by the Royal Mint in 2007. Awards Gateshead Millennium Bridge has won a total of 25 awards for design and lighting. For the construction of the bridge, the architect WilkinsonEyre won the 2002 Royal Institute of British Architects (RIBA) Stirling Prize. This was a somewhat controversial decision; although the RIBA judges described the bridge as a "truly heroic piece of engineering and construction", there was debate among the attendees of the awards ceremony as to whether it also counted as architecture, with some claiming that it was not a building. However, Jim Eyre of WilkinsonEyre argued that the feat did cross over into the boundary of architecture. WilkinsonEyre and Gifford also won the 2003 IStructE Supreme Award. The bridge was awarded the British Constructional Steelwork Association's Structural Steel Design Award in 2002. In 2005, the bridge received the Outstanding Structure Award from the International Association for Bridge and Structural Engineering.
Technology
Bridges
null
547693
https://en.wikipedia.org/wiki/Chthonian%20planet
Chthonian planet
Chthonian planets (, sometimes 'cthonian') are a hypothetical class of celestial objects resulting from the stripping away of a gas giant's hydrogen and helium atmosphere and outer layers, which is called hydrodynamic escape. Such atmospheric stripping is a likely result of proximity to a star. The remaining rocky or metallic core would resemble a terrestrial planet in many respects. Etymology Chthon (from ) means "earth". The term chthonian was coined by Hébrard et al. and generally refers to Greek chthonic deities from the infernal underground. Possible examples Transit-timing variation measurements indicate, for example, that Kepler-52b, Kepler-52c and Kepler-57b have maximum masses between 30 and 100 times the mass of Earth (although the actual masses could be much lower); with radii about two Earth radii, they might have densities larger than that of an iron planet of the same size. These exoplanets orbit very close to their stars and could be the remnant cores of evaporated gas giants or brown dwarfs. If cores are massive enough they could remain compressed for billions of years despite losing the atmospheric mass. As there is a lack of gaseous "hot-super-Earths" between 2.2 and 3.8 Earth-radii exposed to over 650 Earth incident flux, it is assumed that exoplanets below such radii exposed to such stellar fluxes could have had their envelopes stripped by photoevaporation. HD 209458 b HD 209458 b is an example of a gas giant that is in the process of having its atmosphere stripped away, though it will not become a chthonian planet for many billions of years, if ever. A similar case would be Gliese 436b, which has already lost 10% of its atmosphere. CoRoT-7b CoRoT-7b is the first exoplanet found that might be chthonian. Other researchers dispute this, and conclude CoRoT-7b was always a rocky planet and not the eroded core of a gas or ice giant, due to the young age of the star system. TOI-849 b In 2020, a high-density planet more massive than Neptune was found very close to its host star, within the Neptunian desert. This world, TOI-849 b, may very well be a chthonian planet.
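To put the density comparison above in rough numbers, the following back-of-the-envelope sketch computes the mean density of a core with a mass and radius drawn from the quoted ranges; the constants are approximate and the specific values (30 Earth masses, 2 Earth radii) are one illustrative point, not figures from the article.

    import math

    EARTH_MASS_KG = 5.97e24
    EARTH_RADIUS_M = 6.371e6

    def mean_density(mass_in_earths, radius_in_earths):
        # Mean density in kg/m^3 from mass / (4/3 * pi * r^3).
        mass = mass_in_earths * EARTH_MASS_KG
        volume = (4.0 / 3.0) * math.pi * (radius_in_earths * EARTH_RADIUS_M) ** 3
        return mass / volume

    print(round(mean_density(1, 1)))    # Earth itself: about 5,500 kg/m^3
    print(round(mean_density(30, 2)))   # a 30 Earth-mass core at 2 Earth radii: about 20,700 kg/m^3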
Physical sciences
Planetary science
Astronomy
547980
https://en.wikipedia.org/wiki/Stone%20pine
Stone pine
The stone pine, botanical name Pinus pinea, also known as the Italian stone pine, Mediterranean stone pine, umbrella pine and parasol pine, is a tree from the pine family (Pinaceae). The tree is native to the Mediterranean region, occurring in Southern Europe and the Levant. The species was introduced into North Africa millennia ago, and is also naturalized in the Canary Islands, South Africa and New South Wales. Stone pines have been used and cultivated for their edible pine nuts since prehistoric times. They are widespread in horticultural cultivation as ornamental trees, planted in gardens and parks around the world. This plant has gained the Royal Horticultural Society's Award of Garden Merit. Pinus pinea is a diagnostic species of the vegetation class Pinetea halepensis. Description The stone pine is a coniferous evergreen tree that can exceed in height, but is more typical. In youth, it is a bushy globe, in mid-age an umbrella canopy on a thick trunk, and, in maturity, a broad and flat crown over in width. The bark is thick, red-brown and deeply fissured into broad vertical plates. Foliage The flexible mid-green leaves are needle-like, in bundles of two, and are long (exceptionally up to ). Young trees up to 5–10 years old bear juvenile leaves, which are very different, single (not paired), long, glaucous blue-green; the adult leaves appear mixed with juvenile leaves from the fourth or fifth year on, replacing it fully by around the tenth year. Juvenile leaves are also produced in regrowth following injury, such as a broken shoot, on older trees. The cones are broad, ovoid, long, and take 36 months to mature, longer than any other pine. The seeds (pine nuts, piñones, pinhões, pinoli, or pignons) are large, long, and pale brown with a powdery black coating that rubs off easily, and have a rudimentary wing that falls off very easily. The wing is ineffective for wind dispersal, and the seeds are animal-dispersed, originally mainly by the Iberian magpie, but in recent history largely by humans. Distribution and habitat The prehistoric range of Pinus pinea included North Africa in the Sahara Desert and Maghreb regions during a more humid climate period, in present-day Morocco, Algeria, Tunisia, and Libya. Its contemporary natural range is in the Mediterranean forests, woodlands, and scrub biome ecoregions and countries, including the following: Southern Europe The Iberian conifer forests ecoregion of the Iberian Peninsula in Spain and Portugal; the Italian sclerophyllous and semi-deciduous forests ecoregion in France and Italy; the Tyrrhenian-Adriatic sclerophyllous and mixed forests ecoregion of southern Italy, Sicily, and Sardinia; the Illyrian deciduous forests of the eastern coast of the Ionian and Adriatic Seas in Albania and Croatia; the Crimean Submediterranean forest complex ecoregion on Krasnodar Krai (Russia) and the Crimea Peninsula; and the Aegean and Western Turkey sclerophyllous and mixed forests ecoregion of the southern Balkan Peninsula in Greece. In many parts of northern Italy, large parks with pine trees were laid out by the sea. Examples are the Pineta of Jesolo and Barcola, the Urban Beach of Trieste. In Greece, although the species is not widely distributed, an extensive stone pine forest exists in western Peloponnese at Strofylia on the peninsula separating the Kalogria Lagoon from the Mediterranean Sea. This coastal forest is at least long, with dense and tall stands of Pinus pinea mixed with Pinus halepensis. Currently, P. 
halepensis is outcompeting stone pines in many locations of the forest. Another location in Greece is at Koukounaries on the northern Aegean island of Skiathos at the southwest corner of the island. This is a half-mile-long dense stand of stone and Aleppo pines that lies between a lagoon and the Aegean Sea. Western Asia In Western Asia, the Eastern Mediterranean conifer-sclerophyllous-broadleaf forests ecoregion in Turkey; and the Southern Anatolian montane conifer and deciduous forests ecoregion in Turkey, Syria, Lebanon, Israel and in the Palestinian Territories. Northern Africa The Mediterranean woodlands and forests ecoregion of North Africa, in Morocco and Algeria. South Africa In the Western Cape Province, the pines were according to legend planted by the French Huguenot refugees who settled at the Cape of Good Hope during the late 17th century and who brought the seeds with them from France. The tree is known in the Afrikaans language as kroonden. Pests The introduced Western conifer seed bug (Leptoglossus occidentalis) was accidentally imported with timber to northern Italy in the late 1990s from the western US, and has spread across Europe as an invasive pest species since then. It feeds on the sap of developing conifer cones throughout its life, and its sap-sucking causes the developing seeds to wither and misdevelop. It has destroyed most of the pine nut seeds in Italy, threatening P. pinea in its native habitats there. Pestalotiopsis pini (a genus of ascomycete fungi), was found as an emerging pathogen on Pinus pinea in Portugal. Evidence of shoot blight and stem necrosis were found in stone pine orchards and urban areas in 2020. The edible pine nut production has been decreasing in the affected area due to several factors, including pests and diseases. The fungus was found on needles, shoots and trunks of P. pinea and also on P. pinaster. Pestalotiopsis fungal species could represent a threat to the health of pine forests in the Mediterranean basin. Uses Food Pinus pinea has been cultivated extensively for at least 6,000 years for its edible pine nuts, which have been trade items since early historic times. The tree has been cultivated throughout the Mediterranean region for so long that it has naturalized, and is often considered native beyond its natural range. Ornamental The tree is among the current symbols of Rome. It was first planted in Rome during the Roman Republic, where many historic Roman roads, such as the Via Appia, were (and still are) embellished with lines of stone pines. Stone pines were planted on the hills of the Bosphorus strait in Istanbul for ornamental purposes during the Ottoman period. In Italy, the stone pine has been an aesthetic landscape element since the Italian Renaissance garden period. In the 1700s, P. pinea began being introduced as an ornamental tree to other Mediterranean climate regions of the world, and is now often found in gardens and parks in South Africa, California, and Australia. It has naturalized beyond cities in South Africa to the extent that it is listed as an invasive species there. It is also planted in western Europe up to southern Scotland, and on the East Coast of the United States up to New Jersey. In the United Kingdom it has won the Royal Horticultural Society's Award of Garden Merit. Small specimens are used for bonsai, and also grown in large pots and planters. The year-old seedlings are seasonally available as table-top Christmas trees tall. 
Other Other products of economic value include resin, bark for tannin extraction, and empty pine cone shells for fuel. Pinus pinea is also currently widely cultivated around the Mediterranean for environmental protection such as consolidation of coastal dunes, soil conservation and protection of coastal agricultural crops. Gallery
Biology and health sciences
Pinaceae
Plants
548173
https://en.wikipedia.org/wiki/Instability
Instability
In dynamical systems instability means that some of the outputs or internal states increase with time, without bounds. Not all systems that are not stable are unstable; systems can also be marginally stable or exhibit limit cycle behavior. In structural engineering, a structural beam or column can become unstable when excessive compressive load is applied. Beyond a certain threshold, structural deflections magnify stresses, which in turn increases deflections. This can take the form of buckling or crippling. The general field of study is called structural stability. Atmospheric instability is a major component of all weather systems on Earth. Instability in control systems In the theory of dynamical systems, a state variable in a system is said to be unstable if it evolves without bounds. A system itself is said to be unstable if at least one of its state variables is unstable. In continuous time control theory, a system is unstable if any of the roots of its characteristic equation has real part greater than zero (or if zero is a repeated root). This is equivalent to any of the eigenvalues of the state matrix having either real part greater than zero, or, for the eigenvalues on the imaginary axis, the algebraic multiplicity being larger than the geometric multiplicity. The equivalent condition in discrete time is that at least one of the eigenvalues is greater than 1 in absolute value, or that two or more eigenvalues are equal and of unit absolute value. Instability in solid mechanics Buckling Elastic instability Drucker stability of a nonlinear constitutive model Biot instability (surface wrinkling in elastomers) Baroclinic instability Fluid instabilities Fluid instabilities occur in liquids, gases and plasmas, and are often characterized by the shape that form; they are studied in fluid dynamics and magnetohydrodynamics. Fluid instabilities include: Ballooning instability (some analogy to the Rayleigh–Taylor instability); found in the magnetosphere Atmospheric instability Hydrodynamic instability or dynamic instability (atmospheric dynamics) Inertial instability; baroclinic instability; symmetric instability, conditional symmetric or convective symmetric instability; barotropic instability; Helmholtz or shearing instability; rotational instability Hydrostatic instability or static instability/vertical instability (parcel instability), thermodynamic instability (atmospheric thermodynamics) Conditional or static instability, buoyant instability, latent instability, nonlocal static instability, conditional-symmetric instability; convective, potential, or thermal instability, convective instability of the first and second kind; absolute or mechanical instability Bénard instability Drift mirror instability Kelvin–Helmholtz instability (similar, but different from the diocotron instability in plasmas) Rayleigh–Taylor instability Saffman–Taylor instability Plateau-Rayleigh instability (similar to the Rayleigh–Taylor instability) Richtmyer-Meshkov instability (similar to the Rayleigh–Taylor instability) Shock Wave Instability Benjamin-Feir Instability (also known as modulational instability) Plasma instabilities Plasma instabilities can be divided into two general groups (1) hydrodynamic instabilities (2) kinetic instabilities. Plasma instabilities are also categorised into different modes – see this paragraph in plasma stability. 
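The control-systems criterion described above lends itself to a quick numerical check. The sketch below tests only the strict conditions (a positive real part in continuous time, magnitude above one in discrete time) and ignores the repeated-eigenvalue cases on the stability boundary; numpy is assumed to be available and the example matrices are made up.

    import numpy as np

    def unstable_continuous(A):
        # Continuous time: unstable if any eigenvalue of the state matrix has positive real part.
        return bool(np.any(np.linalg.eigvals(A).real > 0))

    def unstable_discrete(A):
        # Discrete time: unstable if any eigenvalue lies outside the unit circle.
        return bool(np.any(np.abs(np.linalg.eigvals(A)) > 1))

    A_decaying = np.array([[-1.0, 2.0], [0.0, -3.0]])   # eigenvalues -1 and -3
    A_growing = np.array([[0.5, 1.0], [0.0, 2.0]])      # eigenvalue 2 has positive real part
    print(unstable_continuous(A_decaying), unstable_continuous(A_growing))  # False True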
Instabilities of stellar systems Galaxies and star clusters can be unstable, if small perturbations in the gravitational potential cause changes in the density that reinforce the original perturbation. Such instabilities usually require that the motions of stars be highly correlated, so that the perturbation is not "smeared out" by random motions. After the instability has run its course, the system is typically "hotter" (the motions are more random) or rounder than before. Instabilities in stellar systems include: Bar instability of rapidly rotating disks Jeans instability Firehose instability Gravothermal instability Radial-orbit instability Various instabilities in cold rotating disks Joint instabilities The most common residual disability after any sprain in the body is instability. Mechanical instability includes insufficient stabilizing structures and mobility that exceed the physiological limits. Functional instability involves recurrent sprains or a feeling of giving way of the injured joint. Injuries cause proprioceptive deficits and impaired postural control in the joint. Individuals with muscular weakness, occult instability, and decreased postural control are more susceptible to injury than those with better postural control. Instability leads to an increase in postural sway, the measurement of the time and distance a subject spends away from an ideal center of pressure. The measurement of a subject's postural sway can be calculated through testing center of pressure (CoP), which is defined as the vertical projection of center of mass on the ground. Investigators have theorized that if injuries to joints cause deafferentation, the interruption of sensory nerve fibers, and functional instability, then a subject's postural sway should be altered. Joint stability can be enhanced by the use of an external support system, like a brace, to alter body mechanics. The mechanical support provided by a brace provides cutaneous afferent feedback in maintaining postural control and increasing stability.
Physical sciences
Physics basics: General
Physics
2021152
https://en.wikipedia.org/wiki/Firebrat
Firebrat
The firebrat (Thermobia domestica) is a small insect (typically 1–1.5 cm) in the order Zygentoma. Habitat Firebrats prefer relatively warm temperatures (36–39 °C) and require some humidity. They are commonly found indoors near heat sources such as furnaces and boilers. They feed on a wide variety of carbohydrates and starches that are also protein sources, such as dog food, flour and book bindings. They are distributed throughout most parts of the world and are normally found outdoors under rocks, plant litter, and in similar environments, but are also often found indoors where they are considered pests. They do not cause major damage, but they can contaminate food, damage paper goods, and stain clothing. Otherwise they are mostly harmless. Behavior Firebrats use pheromones to attract one another and congregate. To maintain a group, firebrats must remain in contact with one another. Breeding At 1.5 to 4.5 months of age, the female firebrat begins laying eggs if the temperature is right (32–41 °C or 90–106 °F). A female may lay up to 6,000 eggs in a lifetime of about 3–5 years. After incubation (12–13 days), the nymphs hatch. They may reach maturity in as little as 2–4 months, resulting in several generations each year. Meiosis The sequential changes occurring during the prophase I stage of meiosis in T. domestica ovaries have been described in detail.
Biology and health sciences
Zygentoma
Animals
2021419
https://en.wikipedia.org/wiki/Relativistic%20mechanics
Relativistic mechanics
In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where it would be permitted for particles and light to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while attempts for that of GR is quantum gravity, an unsolved problem in physics. As with classical mechanics, the subject can be divided into "kinematics"; the description of motion by specifying positions, velocities and accelerations, and "dynamics"; a full description by considering energies, momenta, and angular momenta and their conservation laws, and forces acting on particles or exerted by particles. There is however a subtlety; what appears to be "moving" and what is "at rest"—which is termed by "statics" in classical mechanics—depends on the relative motion of observers who measure in frames of reference. Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In addition to modifying notions of space and time, SR forces one to reconsider the concepts of mass, momentum, and energy all of which are important constructs in Newtonian mechanics. SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated. Consequently, another modification is the concept of the center of mass of a system, which is straightforward to define in classical mechanics but much less obvious in relativity – see relativistic center of mass for details. The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor, which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four-dimensional spacetime, which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors, or four-dimensional tensors. The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector). 
Relativistic kinematics The relativistic four-velocity, that is the four-vector representing velocity in relativity, is defined as $\mathbf{U} = \frac{d\mathbf{X}}{d\tau}$. In the above, $\tau$ is the proper time of the path through spacetime, called the world-line, followed by the object whose velocity the above represents, and $\mathbf{X} = (ct, x, y, z)$ is the four-position; the coordinates of an event. Due to time dilation, the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by $dt = \gamma(\mathbf{u})\, d\tau$, where $\gamma(\mathbf{u})$ is the Lorentz factor $\gamma(\mathbf{u}) = \frac{1}{\sqrt{1 - u^2/c^2}} = \frac{dt}{d\tau}$ (either version may be quoted), so it follows that $\mathbf{U} = \gamma(\mathbf{u})\,(c, u_x, u_y, u_z)$. Apart from the factor of $\gamma(\mathbf{u})$, the last three components are the velocity $\mathbf{u}$ of the object as seen by the observer in their own reference frame. The $\gamma(\mathbf{u})$ is determined by the velocity between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. The magnitude of this four-vector is invariant under Lorentz transformations; to check what an observer in a different reference frame sees, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames. Relativistic dynamics Rest mass and relativistic mass The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written $m_0$. If an object moves with velocity $\mathbf{u}$ in some other reference frame, the quantity $m = \gamma(\mathbf{u})\, m_0$ is often called the object's "relativistic mass" in that frame. Some authors use $m$ to denote rest mass, but for the sake of clarity this article will follow the convention of using $m$ for relativistic mass and $m_0$ for rest mass. Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught. Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful. See mass in special relativity for more information on this debate. A particle whose rest mass is zero is called massless. Photons and gravitons are thought to be massless, and neutrinos are nearly so. Relativistic energy and momentum There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR. The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors: $\mathbf{P} = m_0 \mathbf{U}$. The energy and momentum of an object with invariant mass $m_0$, moving with velocity $\mathbf{u}$ with respect to a given frame of reference, are respectively given by $E = \gamma(\mathbf{u})\, m_0 c^2$ and $\mathbf{p} = \gamma(\mathbf{u})\, m_0 \mathbf{u}$. The factor $\gamma(\mathbf{u})$ comes from the definition of the four-velocity described above. The appearance of $\gamma(\mathbf{u})$ may be stated in an alternative way, which will be explained in the next section. The kinetic energy, $K$, is defined as $K = E - m_0 c^2 = (\gamma(\mathbf{u}) - 1)\, m_0 c^2$, and the speed as a function of kinetic energy is given by $u = c\,\sqrt{1 - \left(\frac{m_0 c^2}{K + m_0 c^2}\right)^2}$. The spatial momentum may be written as $\mathbf{p} = m \mathbf{u}$, preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy.
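A small numerical sketch of these definitions, using only the standard library; the electron rest mass and the speed of 0.8c are illustrative choices, and the final line checks the frame-invariant combination that is derived in the next section.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def lorentz_factor(u):
        return 1.0 / math.sqrt(1.0 - (u / C) ** 2)

    def energy_momentum(m0, u):
        # Total energy, momentum magnitude and kinetic energy of a particle of rest mass m0.
        gamma = lorentz_factor(u)
        E = gamma * m0 * C ** 2
        p = gamma * m0 * u
        K = (gamma - 1.0) * m0 * C ** 2
        return E, p, K

    m0 = 9.11e-31                            # electron rest mass in kg (approximate)
    E, p, K = energy_momentum(m0, 0.8 * C)   # gamma = 5/3 at 0.8c
    print(math.isclose(E**2 - (p * C)**2, (m0 * C**2)**2, rel_tol=1e-9))  # True in every frame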
Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for explicitly through the 4-velocity or coordinate time. A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by , multiplying the momentum by , and noting that the two expressions are equal. This yields may then be eliminated by dividing this equation by and squaring, dividing the definition of energy by and squaring, and substituting: This is the relativistic energy–momentum relation. While the energy and the momentum depend on the frame of reference in which they are measured, the quantity is invariant. Its value is times the squared magnitude of the 4-momentum vector. The invariant mass of a system may be written as Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame. Mass–energy equivalence The relativistic energy–momentum equation holds for all particles, even for massless particles for which m0 = 0. In this case: When substituted into Ev = c2p, this gives v = c: massless particles (such as photons) always travel at the speed of light. Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel. Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest (v = 0, p = 0), there is a non-zero mass remaining: m0 = E/c2. The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, total energy increases and so does momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remain constant, because in both cases, the energy and momentum increases subtract from each other, and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles. The mass of systems and conservation of invariant mass For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles: The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame. In this special frame, the relativistic energy–momentum equation has p = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c2 This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. 
In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules, but also includes all the totaled energies in the system as well. Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it. An increase in the energy of such a system which is caused by translating the system to an inertial frame which is not the center of momentum frame, causes an increase in energy and momentum without an increase in invariant mass. E = m0c2, however, applies only to isolated systems in their center-of-momentum frame where momentum sums to zero. Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy." Closed (isolated) systems In a "totally-closed" system (i.e., isolated system) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest ΔE = Δmc2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system. Chemical and nuclear reactions In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (for chemistry) or between nucleons in nuclei (in atomic reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) give the equivalent energy of heat and light which may be emitted if the reaction proceeds. In chemistry, the mass differences associated with the emitted energy are around 10−9 of the molecular mass. However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. 
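As a rough numerical illustration of the bookkeeping described above, the sketch below converts a nuclear-scale energy release into its mass equivalent via E = mc². It assumes the conventional value of 4.184×10¹² J per kiloton of TNT and uses a 21-kiloton yield purely as an example figure; both numbers are assumptions of the sketch rather than data drawn from this text.

```python
# Minimal sketch: mass equivalent of a nuclear-scale energy release (E = m c^2).
# Assumes 1 kiloton of TNT ~ 4.184e12 J (a conventional definition) and an
# illustrative yield of 21 kt; neither value is taken from this article's data.
c = 299_792_458.0            # speed of light, m/s
E = 21 * 4.184e12            # energy released, joules
m = E / c**2                 # mass carried away by that energy, kilograms
print(f"{m * 1e3:.2f} g")    # ~0.98 g of mass radiated away as heat and light
```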
The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science. Center of momentum frame The equation E = m0c2 applies only to isolated systems in their center of momentum frame. It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass (invariant mass) of the system. Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and "matter", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass. For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them. Angular momentum In relativistic mechanics, the time-varying mass moment and orbital 3-angular momentum of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle: where ∧ denotes the exterior product. This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution. Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. Force In special relativity, Newton's second law does not hold in the form F = ma, but it does if it is expressed as where p = γ(v)m0v is the momentum as defined above and m0 is the invariant mass. 
Thus, the force is given by {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Derivation |- | Starting from Carrying out the derivatives gives If the acceleration is separated into the part parallel to the velocity (a∥) and the part perpendicular to it (a⊥), so that: one gets By construction a∥ and v are parallel, so (v·a∥)v is a vector with magnitude v2a∥ in the direction of v (and hence a∥) which allows the replacement: then |} Consequently, in some old texts, γ(v)3m0 is referred to as the longitudinal mass, and γ(v)m0 is referred to as the transverse mass, which is numerically the same as the relativistic mass. See mass in special relativity. If one inverts this to calculate acceleration from force, one gets The force described in this section is the classical 3-D force which is not a four-vector. This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion. It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume) is a four-vector (density of weight +1) when combined with the negative of the density of power transferred. Torque The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time: or in tensor components: where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. Kinetic energy The work-energy theorem says the change in kinetic energy is equal to the work done on the body. In special relativity: {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Derivation |- | |} If in the initial state the body was at rest, so v0 = 0 and γ0(v0) = 1, and in the final state it has speed v1 = v, setting γ1(v1) = γ(v), the kinetic energy is then; a result that can be directly obtained by subtracting the rest energy m0c2 from the total relativistic energy γ(v)m0c2. Newtonian limit The Lorentz factor γ(v) can be expanded into a Taylor series or binomial series for (v/c)2 < 1, obtaining: and consequently For velocities much smaller than that of light, one can neglect the terms with c2 and higher in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
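The binomial expansion invoked in the Newtonian-limit argument above can be reproduced symbolically. A minimal sketch, assuming SymPy is available (the variable names are the sketch's own):

```python
# Expand the relativistic kinetic energy (gamma - 1) m0 c^2 in powers of v/c:
# the leading term is the Newtonian (1/2) m0 v^2, as stated above.
import sympy as sp

v, c, m0 = sp.symbols('v c m_0', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
kinetic = (gamma - 1) * m0 * c**2

print(sp.series(kinetic, v, 0, 6))
# -> m_0*v**2/2 + 3*m_0*v**4/(8*c**2) + O(v**6)
```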
Physical sciences
Basics_10
null
2024795
https://en.wikipedia.org/wiki/Mathematical%20formulation%20of%20the%20Standard%20Model
Mathematical formulation of the Standard Model
This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group . The theory is commonly viewed as describing the fundamental set of particles – the leptons, quarks, gauge bosons and the Higgs boson. The Standard Model is renormalizable and mathematically self-consistent; however, despite having huge and continued successes in providing experimental predictions, it does leave some unexplained phenomena. In particular, although the physics of special relativity is incorporated, general relativity is not, and the Standard Model will fail at energies or distances where the graviton is expected to emerge. Therefore, in a modern field theory context, it is seen as an effective field theory. Quantum field theory The standard model is a quantum field theory, meaning its fundamental objects are quantum fields, which are defined at all points in spacetime. QFT treats particles as excited states (also called quanta) of their underlying quantum fields, which are more fundamental than the particles. These fields are the fermion fields, , which account for "matter particles"; the electroweak boson fields , , , and ; the gluon field, ; and the Higgs field, . That these are quantum rather than classical fields has the mathematical consequence that they are operator-valued. In particular, values of the fields generally do not commute. As operators, they act upon a quantum state (ket vector). Alternative presentations of the fields As is common in quantum theory, there is more than one way to look at things. At first the basic fields given above may not seem to correspond well with the "fundamental particles" in the chart above, but there are several alternative presentations that, in particular contexts, may be more appropriate than those that are given above. Fermions Rather than having one fermion field , it can be split up into separate components for each type of particle. This mirrors the historical evolution of quantum field theory, since the electron component (describing the electron and its antiparticle the positron) is then the original field of quantum electrodynamics, which was later accompanied by and fields for the muon and tauon respectively (and their antiparticles). Electroweak theory added , and for the corresponding neutrinos. The quarks add still further components. In order to be four-spinors like the electron and other lepton components, there must be one quark component for every combination of flavor and color, bringing the total to 24 (3 for charged leptons, 3 for neutrinos, and 2·3·3 = 18 for quarks). Each of these is a four component bispinor, for a total of 96 complex-valued components for the fermion field. An important definition is the barred fermion field , which is defined to be , where denotes the Hermitian adjoint of , and is the zeroth gamma matrix. If is thought of as an matrix then should be thought of as a matrix. A chiral theory An independent decomposition of is that into chirality components: where is the fifth gamma matrix. This is very important in the Standard Model because left and right chirality components are treated differently by the gauge interactions. In particular, under weak isospin SU(2) transformations the left-handed particles are weak-isospin doublets, whereas the right-handed are singlets – i.e. the weak isospin of is zero. Put more simply, the weak interaction could rotate e.g. 
a left-handed electron into a left-handed neutrino (with emission of a ), but could not do so with the same right-handed particles. As an aside, the right-handed neutrino originally did not exist in the standard model – but the discovery of neutrino oscillation implies that neutrinos must have mass, and since chirality can change during the propagation of a massive particle, right-handed neutrinos must exist in reality. This does not however change the (experimentally proven) chiral nature of the weak interaction. Furthermore, acts differently on and (because they have different weak hypercharges). Mass and interaction eigenstates A distinction can thus be made between, for example, the mass and interaction eigenstates of the neutrino. The former is the state that propagates in free space, whereas the latter is the different state that participates in interactions. Which is the "fundamental" particle? For the neutrino, it is conventional to define the "flavor" (, , or ) by the interaction eigenstate, whereas for the quarks we define the flavor (up, down, etc.) by the mass state. We can switch between these states using the CKM matrix for the quarks, or the PMNS matrix for the neutrinos (the charged leptons on the other hand are eigenstates of both mass and flavor). As an aside, if a complex phase term exists within either of these matrices, it will give rise to direct CP violation, which could explain the dominance of matter over antimatter in our current universe. This has been proven for the CKM matrix, and is expected for the PMNS matrix. Positive and negative energies Finally, the quantum fields are sometimes decomposed into "positive" and "negative" energy parts: . This is not so common when a quantum field theory has been set up, but often features prominently in the process of quantizing a field theory. Bosons Due to the Higgs mechanism, the electroweak boson fields , , , and "mix" to create the states that are physically observable. To retain gauge invariance, the underlying fields must be massless, but the observable states can gain masses in the process. These states are: The massive neutral (Z) boson: The massless neutral boson: The massive charged W bosons: where is the Weinberg angle. The field is the photon, which corresponds classically to the well-known electromagnetic four-potential – i.e. the electric and magnetic fields. The field actually contributes in every process the photon does, but due to its large mass, the contribution is usually negligible. Perturbative QFT and the interaction picture Much of the qualitative descriptions of the standard model in terms of "particles" and "forces" comes from the perturbative quantum field theory view of the model. In this, the Lagrangian is decomposed as into separate free field and interaction Lagrangians. The free fields care for particles in isolation, whereas processes involving several particles arise through interactions. The idea is that the state vector should only change when particles interact, meaning a free particle is one whose quantum state is constant. This corresponds to the interaction picture in quantum mechanics. In the more common Schrödinger picture, even the states of free particles change over time: typically the phase changes at a rate that depends on their energy. In the alternative Heisenberg picture, state vectors are kept constant, at the price of having the operators (in particular the observables) be time-dependent. 
The interaction picture constitutes an intermediate between the two, where some time dependence is placed in the operators (the quantum fields) and some in the state vector. In QFT, the former is called the free field part of the model, and the latter is called the interaction part. The free field model can be solved exactly, and then the solutions to the full model can be expressed as perturbations of the free field solutions, for example using the Dyson series. It should be observed that the decomposition into free fields and interactions is in principle arbitrary. For example, renormalization in QED modifies the mass of the free field electron to match that of a physical electron (with an electromagnetic field), and will in doing so add a term to the free field Lagrangian which must be cancelled by a counterterm in the interaction Lagrangian, that then shows up as a two-line vertex in the Feynman diagrams. This is also how the Higgs field is thought to give particles mass: the part of the interaction term that corresponds to the nonzero vacuum expectation value of the Higgs field is moved from the interaction to the free field Lagrangian, where it looks just like a mass term having nothing to do with the Higgs field. Free fields Under the usual free/interaction decomposition, which is suitable for low energies, the free fields obey the following equations: The fermion field satisfies the Dirac equation; for each type of fermion. The photon field satisfies the wave equation . The Higgs field satisfies the Klein–Gordon equation. The weak interaction fields satisfy the Proca equation. These equations can be solved exactly. One usually does so by considering first solutions that are periodic with some period along each spatial axis; later taking the limit: will lift this periodicity restriction. In the periodic case, the solution for a field (any of the above) can be expressed as a Fourier series of the form where: is a normalization factor; for the fermion field it is , where is the volume of the fundamental cell considered; for the photon field it is . The sum over is over all momenta consistent with the period , i.e., over all vectors where are integers. The sum over covers other degrees of freedom specific for the field, such as polarization or spin; it usually comes out as a sum from to or from to . is the relativistic energy for a momentum quantum of the field, when the rest mass is . and are annihilation and creation operators respectively for "a-particles" and "b-particles" respectively of momentum ; "b-particles" are the antiparticles of "a-particles". Different fields have different "a-" and "b-particles". For some fields, and are the same. and are non-operators that carry the vector or spinor aspects of the field (where relevant). is the four-momentum for a quantum with momentum . denotes an inner product of four-vectors. In the limit , the sum would turn into an integral with help from the hidden inside . The numeric value of also depends on the normalization chosen for and . Technically, is the Hermitian adjoint of the operator in the inner product space of ket vectors. The identification of and as creation and annihilation operators comes from comparing conserved quantities for a state before and after one of these have acted upon it. 
can for example be seen to add one particle, because it will add to the eigenvalue of the a-particle number operator, and the momentum of that particle ought to be since the eigenvalue of the vector-valued momentum operator increases by that much. For these derivations, one starts out with expressions for the operators in terms of the quantum fields. That the operators with are creation operators and the one without annihilation operators is a convention, imposed by the sign of the commutation relations postulated for them. An important step in preparation for calculating in perturbative quantum field theory is to separate the "operator" factors and above from their corresponding vector or spinor factors and . The vertices of Feynman graphs come from the way that and from different factors in the interaction Lagrangian fit together, whereas the edges come from the way that the s and s must be moved around in order to put terms in the Dyson series on normal form. Interaction terms and the path integral approach The Lagrangian can also be derived without using creation and annihilation operators (the "canonical" formalism) by using a path integral formulation, pioneered by Feynman building on the earlier work of Dirac. Feynman diagrams are pictorial representations of interaction terms. A quick derivation is indeed presented at the article on Feynman diagrams. Lagrangian formalism We can now give some more detail about the aforementioned free and interaction terms appearing in the Standard Model Lagrangian density. Any such term must be both gauge and reference-frame invariant, otherwise the laws of physics would depend on an arbitrary choice or the frame of an observer. Therefore, the global Poincaré symmetry, consisting of translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity must apply. The local gauge symmetry is the internal symmetry. The three factors of the gauge symmetry together give rise to the three fundamental interactions, after some appropriate relations have been defined, as we shall see. Kinetic terms A free particle can be represented by a mass term, and a kinetic term that relates to the "motion" of the fields. Fermion fields The kinetic term for a Dirac fermion is where the notations are carried from earlier in the article. can represent any, or all, Dirac fermions in the standard model. Generally, as below, this term is included within the couplings (creating an overall "dynamical" term). Gauge fields For the spin-1 fields, first define the field strength tensor for a given gauge field (here we use ), with gauge coupling constant . The quantity is the structure constant of the particular gauge group, defined by the commutator where are the generators of the group. In an abelian (commutative) group (such as the we use here) the structure constants vanish, since the generators all commute with each other. Of course, this is not the case in general – the standard model includes the non-Abelian and groups (such groups lead to what is called a Yang–Mills gauge theory). We need to introduce three gauge fields corresponding to each of the subgroups . The gluon field tensor will be denoted by , where the index labels elements of the representation of color SU(3). The strong coupling constant is conventionally labelled (or simply where there is no ambiguity). The observations leading to the discovery of this part of the Standard Model are discussed in the article in quantum chromodynamics. 
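The field-strength and structure-constant definitions referred to in this subsection were lost from the text. In one common convention (sign and normalization choices vary between references), they are:

```latex
[t_a, t_b] = i f^{abc}\, t_c, \qquad
F^{a}_{\mu\nu} = \partial_\mu \mathcal{A}^{a}_{\nu}
               - \partial_\nu \mathcal{A}^{a}_{\mu}
               + g\, f^{abc}\, \mathcal{A}^{b}_{\mu}\, \mathcal{A}^{c}_{\nu},
```

so that for an abelian group such as U(1) the last term vanishes, as noted above.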
The notation will be used for the gauge field tensor of where runs over the generators of this group. The coupling can be denoted or again simply . The gauge field will be denoted by . The gauge field tensor for the of weak hypercharge will be denoted by , the coupling by , and the gauge field by . The kinetic term can now be written as where the traces are over the and indices hidden in and respectively. The two-index objects are the field strengths derived from and the vector fields. There are also two extra hidden parameters: the theta angles for and . Coupling terms The next step is to "couple" the gauge fields to the fermions, allowing for interactions. Electroweak sector The electroweak sector interacts with the symmetry group , where the subscript L indicates coupling only to left-handed fermions. where is the gauge field; is the weak hypercharge (the generator of the group); is the three-component gauge field; and the components of are the Pauli matrices (infinitesimal generators of the group) whose eigenvalues give the weak isospin. Note that we have to redefine a new symmetry of weak hypercharge, different from QED, in order to achieve the unification with the weak force. The electric charge , third component of weak isospin (also called or ) and weak hypercharge are related by (or by the alternative convention ). The first convention, used in this article, is equivalent to the earlier Gell-Mann–Nishijima formula. It makes the hypercharge be twice the average charge of a given isomultiplet. One may then define the conserved current for weak isospin as and for weak hypercharge as where is the electric current and the third weak isospin current. As explained above, these currents mix to create the physically observed bosons, which also leads to testable relations between the coupling constants. To explain this in a simpler way, we can see the effect of the electroweak interaction by picking out terms from the Lagrangian. We see that the SU(2) symmetry acts on each (left-handed) fermion doublet contained in , for example where the particles are understood to be left-handed, and where This is an interaction corresponding to a "rotation in weak isospin space" or in other words, a transformation between and via emission of a boson. The symmetry, on the other hand, is similar to electromagnetism, but acts on all "weak hypercharged" fermions (both left- and right-handed) via the neutral , as well as the charged fermions via the photon. Quantum chromodynamics sector The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, with symmetry, generated by . Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by where and are the Dirac spinors associated with up and down-type quarks, and other notations are continued from the previous section. Mass terms and the Higgs mechanism Mass terms The mass term arising from the Dirac Lagrangian (for any fermion ) is , which is not invariant under the electroweak symmetry. This can be seen by writing in terms of left and right-handed components (skipping the actual calculation): i.e. contribution from and terms do not appear. We see that the mass-generating interaction is achieved by constant flipping of particle chirality. 
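The left/right chiral decomposition that this mass-term argument relies on can be made concrete numerically. Below is a minimal sketch, assuming NumPy and the Dirac representation of the gamma matrices (the representation is a convention chosen for the sketch); it builds γ⁵ and checks that the projectors P_L = (1 − γ⁵)/2 and P_R = (1 + γ⁵)/2 split an arbitrary spinor into its chirality components.

```python
# Chiral projectors in the Dirac representation: a numerical sanity check.
import numpy as np

I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
gamma5 = 1j * gamma0 @ gamma[0] @ gamma[1] @ gamma[2]   # gamma^5 = i g0 g1 g2 g3

P_L = (np.eye(4) - gamma5) / 2
P_R = (np.eye(4) + gamma5) / 2

assert np.allclose(P_L @ P_L, P_L) and np.allclose(P_R @ P_R, P_R)  # idempotent
assert np.allclose(P_L @ P_R, 0)                                    # orthogonal
assert np.allclose(P_L + P_R, np.eye(4))                            # complete

psi = np.random.randn(4) + 1j * np.random.randn(4)   # an arbitrary Dirac spinor
assert np.allclose(P_L @ psi + P_R @ psi, psi)        # psi = psi_L + psi_R
```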
The spin-half particles have no right/left chirality pair with the same representations and equal and opposite weak hypercharges, so assuming these gauge charges are conserved in the vacuum, none of the spin-half particles could ever swap chirality, and must remain massless. Additionally, we know experimentally that the W and Z bosons are massive, but a boson mass term contains the combination e.g. , which clearly depends on the choice of gauge. Therefore, none of the standard model fermions or bosons can "begin" with mass, but must acquire it by some other mechanism. Higgs mechanism The solution to both these problems comes from the Higgs mechanism, which involves scalar fields (the number of which depend on the exact form of Higgs mechanism) which (to give the briefest possible description) are "absorbed" by the massive bosons as degrees of freedom, and which couple to the fermions via Yukawa coupling to create what looks like mass terms. In the Standard Model, the Higgs field is a complex scalar field of the group : where the superscripts and indicate the electric charge () of the components. The weak hypercharge () of both components is . The Higgs part of the Lagrangian is where and , so that the mechanism of spontaneous symmetry breaking can be used. There is a parameter here, at first hidden within the shape of the potential, that is very important. In a unitarity gauge one can set and make real. Then is the non-vanishing vacuum expectation value of the Higgs field. has units of mass, and it is the only parameter in the Standard Model that is not dimensionless. It is also much smaller than the Planck scale and about twice the Higgs mass, setting the scale for the mass of all other particles in the Standard Model. This is the only real fine-tuning to a small nonzero value in the Standard Model. Quadratic terms in and arise, which give masses to the W and Z bosons: The mass of the Higgs boson itself is given by Yukawa interaction The Yukawa interaction terms are where , , and are matrices of Yukawa couplings, with the term giving the coupling of the generations and , and h.c. means Hermitian conjugate of preceding terms. The fields and are left-handed quark and lepton doublets. Likewise, , and are right-handed up-type quark, down-type quark, and lepton singlets. Finally is the Higgs doublet and Neutrino masses As previously mentioned, evidence shows neutrinos must have mass. But within the standard model, the right-handed neutrino does not exist, so even with a Yukawa coupling neutrinos remain massless. An obvious solution is to simply add a right-handed neutrino , which requires the addition of a new Dirac mass term in the Yukawa sector: This field however must be a sterile neutrino, since being right-handed it experimentally belongs to an isospin singlet () and also has charge , implying (see above) i.e. it does not even participate in the weak interaction. The experimental evidence for sterile neutrinos is currently inconclusive. Another possibility to consider is that the neutrino satisfies the Majorana equation, which at first seems possible due to its zero electric charge. In this case a new Majorana mass term is added to the Yukawa sector: where denotes a charge conjugated (i.e. anti-) particle, and the terms are consistently all left (or all right) chirality (note that a left-chirality projection of an antiparticle is a right-handed field; care must be taken here due to different notations sometimes used). 
Here we are essentially flipping between left-handed neutrinos and right-handed anti-neutrinos (it is furthermore possible but not necessary that neutrinos are their own antiparticle, so these particles are the same). However, for left-chirality neutrinos, this term changes weak hypercharge by 2 units – not possible with the standard Higgs interaction, requiring the Higgs field to be extended to include an extra triplet with weak hypercharge = 2 – whereas for right-chirality neutrinos, no Higgs extensions are necessary. For both left and right chirality cases, Majorana terms violate lepton number, but possibly at a level beyond the current sensitivity of experiments to detect such violations. It is possible to include both Dirac and Majorana mass terms in the same theory, which (in contrast to the Dirac-mass-only approach) can provide a “natural” explanation for the smallness of the observed neutrino masses, by linking the right-handed neutrinos to yet-unknown physics around the GUT scale (see seesaw mechanism). Since in any case new fields must be postulated to explain the experimental results, neutrinos are an obvious gateway to searching physics beyond the Standard Model. Detailed information This section provides more detail on some aspects, and some reference material. Explicit Lagrangian terms are also provided here. Field content in detail The Standard Model has the following fields. These describe one generation of leptons and quarks, and there are three generations, so there are three copies of each fermionic field. By CPT symmetry, there is a set of fermions and antifermions with opposite parity and charges. If a left-handed fermion spans some representation its antiparticle (right-handed antifermion) spans the dual representation (note that for SU(2), because it is pseudo-real). The column "representation" indicates under which representations of the gauge groups that each field transforms, in the order (SU(3), SU(2), U(1)) and for the U(1) group, the value of the weak hypercharge is listed. There are twice as many left-handed lepton field components as right-handed lepton field components in each generation, but an equal number of left-handed quark and right-handed quark field components. Fermion content This table is based in part on data gathered by the Particle Data Group. Free parameters Upon writing the most general Lagrangian with massless neutrinos, one finds that the dynamics depend on 19 parameters, whose numerical values are established by experiment. Straightforward extensions of the Standard Model with massive neutrinos need 7 more parameters (3 masses and 4 PMNS matrix parameters) for a total of 26 parameters. The neutrino parameter values are still uncertain. The 19 certain parameters are summarized here. The choice of free parameters is somewhat arbitrary. In the table above, gauge couplings are listed as free parameters, therefore with this choice the Weinberg angle is not a free parameter – it is defined as . Likewise, the fine-structure constant of QED is . Instead of fermion masses, dimensionless Yukawa couplings can be chosen as free parameters. For example, the electron mass depends on the Yukawa coupling of the electron to the Higgs field, and its value is . Instead of the Higgs mass, the Higgs self-coupling strength , which is approximately 0.129, can be chosen as a free parameter. Instead of the Higgs vacuum expectation value, the parameter directly from the Higgs self-interaction term can be chosen. Its value is , or approximately = . 
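The numerical values elided at the end of the paragraph above can be illustrated with a short calculation. This sketch assumes the usual convention m_f = y_f v/√2 together with approximate inputs (v ≈ 246 GeV, m_e ≈ 0.511 MeV); both the convention and the input values are assumptions of the example rather than quotations from the text.

```python
# Convert the electron mass into its dimensionless Yukawa coupling, y_e = sqrt(2) m_e / v.
import math

v = 246.22            # Higgs vacuum expectation value, GeV (approximate)
m_e = 0.000511        # electron mass, GeV (approximate)

y_e = math.sqrt(2) * m_e / v
print(f"{y_e:.2e}")   # ~2.9e-06: the electron's coupling to the Higgs field is tiny
```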
The value of the vacuum energy (or more precisely, the renormalization scale used to calculate this energy) may also be treated as an additional free parameter. The renormalization scale may be identified with the Planck scale or fine-tuned to match the observed cosmological constant. However, both options are problematic. Additional symmetries of the Standard Model From the theoretical point of view, the Standard Model exhibits four additional global symmetries, not postulated at the outset of its construction, collectively denoted accidental symmetries, which are continuous U(1) global symmetries. The transformations leaving the Lagrangian invariant are: The first transformation rule is shorthand meaning that all quark fields for all generations must be rotated by an identical phase simultaneously. The fields and are the 2nd (muon) and 3rd (tau) generation analogs of and fields. By Noether's theorem, each symmetry above has an associated conservation law: the conservation of baryon number, electron number, muon number, and tau number. Each quark is assigned a baryon number of , while each antiquark is assigned a baryon number of . Conservation of baryon number implies that the number of quarks minus the number of antiquarks is a constant. Within experimental limits, no violation of this conservation law has been found. Similarly, each electron and its associated neutrino is assigned an electron number of +1, while the anti-electron and the associated anti-neutrino carry a −1 electron number. Similarly, the muons and their neutrinos are assigned a muon number of +1 and the tau leptons are assigned a tau lepton number of +1. The Standard Model predicts that each of these three numbers should be conserved separately in a manner similar to the way baryon number is conserved. These numbers are collectively known as lepton family numbers (LF). (This result depends on the assumption made in Standard Model that neutrinos are massless. Experimentally, neutrino oscillations imply that individual electron, muon and tau numbers are not conserved.) In addition to the accidental (but exact) symmetries described above, the Standard Model exhibits several approximate symmetries. These are the "SU(2) custodial symmetry" and the "SU(2) or SU(3) quark flavor symmetry". U(1) symmetry For the leptons, the gauge group can be written . The two factors can be combined into , where is the lepton number. Gauging of the lepton number is ruled out by experiment, leaving only the possible gauge group . A similar argument in the quark sector also gives the same result for the electroweak theory. Charged and neutral current couplings and Fermi theory The charged currents are These charged currents are precisely those that entered the Fermi theory of beta decay. The action contains the charge current piece For energy much less than the mass of the W-boson, the effective theory becomes the current–current contact interaction of the Fermi theory, . However, gauge invariance now requires that the component of the gauge field also be coupled to a current that lies in the triplet of SU(2). However, this mixes with the , and another current in that sector is needed. These currents must be uncharged in order to conserve charge. So neutral currents are also required, The neutral current piece in the Lagrangian is then Physics beyond the Standard Model
Physical sciences
Particle physics: General
Physics
2025047
https://en.wikipedia.org/wiki/Aplacophora
Aplacophora
Aplacophora is a possibly paraphyletic taxon. This is a class of small, deep-water, exclusively benthic, marine molluscs found in all oceans of the world. All known modern forms are shell-less: only some extinct primitive forms possessed valves. The group comprises the two clades Solenogastres (Neomeniomorpha) and Caudofoveata (Chaetodermomorpha), which between them contain 28 families and about 320 species. The aplacophorans are traditionally considered ancestral to the other mollusc classes. However, the relationship between the two aplacophoran groups and to the other molluscan classes and to each other is as yet unclear. Aplacophorans are cylindrical and worm-like in form, and most very small, being no longer than ; some species, however, can reach a length of . Habitat Caudofoveates generally burrow into the substrate while solenogasters are usually epibenthic. Both taxa are most common in water regions deeper than where some species may reach densities up to four or five specimens per square metre (three or four per square yard). Solenogasters are typically carnivores feeding on cnidarians or sometimes annelids or other taxa while caudofoveates are mostly detritovores or feed on foraminiferans. Description Aplacophorans are worm-like animals, with little resemblance to most other molluscs. They have no shell, although small calcified spicules are embedded in the skin; these spicules are occasionally coated with an organic pellicle that is presumably secreted by microvilli. Caudofoveates lack a foot while solenogasters have a narrow foot which lacks intrinsic musculature. The mantle cavity is reduced into a simple cloaca, into which the anus and excretory organs empty, and is located at the posterior of the animal. The head is rudimentary, and has no eyes or tentacles. The cuticle of both subclasses is chitinous, and has an irregular texture. Spines bear an organic matrix. Sclerites can be hollow or solid; the void within the sclerites of some species fills during growth. Sclerites generally form within the cuticle, protruding through when they are fully grown. They probably start life as amorphous calcium carbonate, which the organic matrix coaxes into an aragonitic habit as the spines mature. Sclerites can be shaped as simple spines, straight, curved, keeled, striated or hooked; or as cupped blades; more complex arrangements are common in copulatory spicules. In several species of solenogaster, sclerites change morphology during growth; young specimens might bear flat, solid, scale-like sclerites, to be replaced with longer hollow spine-like sclerites in adults. The relationship with other molluscs, however, is apparent from some features of the digestive system; aplacophorans possess both a radula and a style. A variety of radular forms and functions exist. Solenogasters are hermaphroditic and assumed to have internal fertilization, in contrast to caudofoveates which have two sexes, and reproduce by external fertilization. During development, the mantle cavity of the larva curls up and closes, creating the worm-like form of the adult. Caudofoveates also differ from Solenogasters in having a head shield and a body that is differentiated into three sections. Taxonomy and evolution This class was once classified as sea cucumbers in the echinoderms. In 1987, they were officially recognized as molluscs and given their own class. It consists of two clades: the Solenogastres and the Caudofoveata. 
It has been considered polyphyletic, but more recent molecular evidence supports it as a monophyletic clade. The affinities of aplacophorans have long been uncertain. Molecular and fossil evidence seemed to place aplacophorans in the clade Aculifera, as a sister group to Polyplacophora. The discovery of Kulindroplax in 2012, a fossil aplacophoran with polyplacophoran-like armour, strongly supports this hypothesis and shows that aplacophorans evolved from progenitors that bore valves.
Biology and health sciences
Mollusks
Animals
2025919
https://en.wikipedia.org/wiki/Pattern%20%28sewing%29
Pattern (sewing)
In sewing and fashion design, a pattern is the template from which the parts of a garment are traced onto woven or knitted fabrics before being cut out and assembled. Patterns are usually made of paper, and are sometimes made of sturdier materials like paperboard or cardboard if they need to be more robust to withstand repeated use. The process of making or cutting patterns is sometimes compounded to the one-word patternmaking, but it can also be written pattern making or pattern cutting. A sloper pattern, also called a block pattern, is a custom-fitted, basic pattern from which patterns for many different styles can be developed. The process of changing the size of a finished pattern is called pattern grading. Several companies, like Butterick and Simplicity, specialize in selling pre-graded patterns directly to consumers who will sew the patterns at home. These patterns are usually printed on tissue paper and include multiple sizes that overlap each other. An illustrated instruction sheet for use and assembly of the item is usually included. The pattern may include multiple style options in one package. Commercial clothing manufacturers make their own patterns in-house as part of their design and production process, usually employing at least one specialized patternmaker. In bespoke clothing, slopers and patterns must be developed for each client, while for commercial production, patterns will be made to fit several standard body sizes. Pattern making A patternmaker typically employs one of two methods to create a pattern. The flat-pattern method is where the entire pattern is drafted on a flat surface from measurements, using rulers, curves, and straight-edges. A pattern maker would also use various tools such as a notcher, drill, and awl to mark the pattern. Usually, flat patterning begins with the creation of a "sloper" or "block" pattern: a simple, fitted garment made to the wearer's measurements. For women, this will usually be a jewel-neck bodice and narrow skirt, and for men, an upper sloper and a pants sloper. The final sloper pattern is usually made of cardboard or paperboard, without seam allowances or style details (thicker paper or cardboard allows repeated tracing and pattern development from the original sloper). Once the shape of the sloper has been refined by making a series of mock-up garments called toiles (UK) or muslins (US) or Nessel in German, the final sloper can be used to create patterns for many styles of garments with varying necklines, sleeves, dart placements, and so on. The flat pattern drafting method is the most commonly used method in menswear; menswear rarely involves draping. The draping method involves creating a mock-up pattern made of a strong fabric (such as calico) in a linen weave. The fabric is far coarser than muslin, but less coarse and thick than canvas or denim. However, it is still very cheap, owing to its unfinished and undyed appearance. Then, by pinning this fabric directly on a form, the fabric outline and markings will be then transferred onto a paper pattern, or the fabric itself will be used as the pattern. Designers drafting a sculpted evening gown or dress which uses a lot of fabric--typically cut on the bias--will use the draping technique, as it is very difficult to achieve this with a flat pattern. This method is also used for collars. Each pattern manufacturer has their own size ranges. A distinction is made between a basic pattern, a first pattern, and a production pattern. 
Patternmakers grade the first cuts to the desired size with the aid of CAD software (computer-aided design). The production pattern must contain all the information necessary for production and all the necessary parts. The collections are produced in sets of sizes. The customer has the garment altered by a tailor after purchase, if necessary. Pattern digitizing After a paper/fabric pattern is completed, very often patternmakers digitize their patterns for archiving and vendor communication purposes. The previous standard for digitizing was the digitizing tablet. Nowadays, automatic options such as scanners and camera systems are available. Fitting patterns Mass market patterns are made standardized, so store-bought patterns fit most people well. Experienced dressmakers can adjust standard patterns to better fit any body shape. A sewer may choose a standard size (usually from the wearer's bust measurement) that has been pre-graded on a purchased pattern. They may decide to tailor or adjust a pattern to improve the fit or style for the garment wearer by using French curves, hip curves, and cutting or folding on straight edges. There are alternate methods of adjusting a pattern, either directly on flat pattern pieces from the wearer's measurements, using a pre-draped personalized sloper, or using draping methods on a dress form with inexpensive fabrics like muslin. Some dress forms are adjustable to match the wearer's unique measurements, and the muslin is fit around the form accordingly. By taking it in or letting it out, a smaller or larger fit can be made from the original pattern. Creating a sample from canvas is another method of making patterns. Canvas fabric is inexpensive, not elastic and made from Urticaceae. It is easy to work with when making quick adjustments, by pinning the fabric around the wearer or a dress form. The sewer cuts the pieces using the same method that they will use for the actual garment, according to a pattern. The pieces are then fit together and darts and other adjustments are made. This provides the sewer with measurements to use as a guideline for marking the patterns and cutting the fabric for the finished garment. Pattern grading Pattern grading is the process of shrinking or enlarging a finished pattern to accommodate it to people of different sizes. Grading rules determine how patterns increase or decrease to create different sizes. Fabric type also influences pattern grading standards. The cost of pattern grading is incomplete without considering marker making. Parametric pattern drafting Parametric pattern drafting implies using a program algorithm to draft patterns for every individual size from scratch, using size measurements, variables and geometric objects. Standard pattern symbols Sewing patterns typically include standard symbols and marks that guide the cutter and/or sewer in cutting and assembling the pieces of the pattern. Patterns may use: Notches, to indicate: Seam allowances. (not all patterns include allowances) Centerlines and other lines important to the fit like the waistline, hip, breast, shoulder tip, etc. Zipper placement Fold point for folded hems and facings Matched points, especially for long or curving seams or seams with ease. For example, the Armscye will usually be notched at the point where ease should begin to be added to the sleeve cap. There is usually no ease through the underarm. Circular holes, perhaps made by an awl or circular punch, to indicate: A dart apex Corners, as they are stitched, i.e. 
without seam allowances Pocket placement, or the placement of other details like trimming Buttonholes and buttons A long arrow, drawn on top of the pattern, to indicate: Grainline, or how the pattern should be aligned with the fabric. The arrow is meant to be aligned parallel to the straight grain of the fabric. A long arrow with arrowheads at both ends indicates that either of two orientations is possible. An arrow with one head probably indicates that the fabric has a direction to it which needs to be considered, such as a pattern which should face up when the wearer is standing. Double lines indicating where the pattern may be lengthened or shortened for a different fit Dot, triangle, or square symbols, to provide "match points" for adjoining pattern pieces, similar to putting puzzle pieces together Many patterns will also have full outlines for some features, like for a patch pocket, making it easier to visualize how things go together. Patterns for commercial clothing manufacture The making of industrial patterns begins with an existing block pattern that most closely resembles the designer's vision. Patterns are cut of oak tag (manila folder) paper, punched with a hole and stored by hanging with a special hook. The pattern is first checked for accuracy, then it is cut out of sample fabrics and the resulting garment is fit-tested. Once the pattern meets the designer's approval, a small production run of selling samples is made and the style is presented to buyers in wholesale markets. If the style has demonstrated sales potential, the pattern is graded for sizes, usually by computer with an apparel industry specific CAD program. There are a wide variety of pattern making and grading/marker making programs, each with their own features. Following grading, the pattern must be vetted; the accuracy of each size and the direct comparison in laying seam lines is done. After these steps have been followed and any errors corrected, the pattern is approved for production. When the manufacturing company is ready to manufacture the style, all of the sizes of each given pattern piece are arranged into a marker, usually by computer. A marker is an arrangement of all of the pattern pieces over the area of the fabric to be cut that minimizes fabric waste while maintaining the desired grainlines. It's sort of like a pattern of patterns from which all pieces will be cut. The marker is then laid on top of the layers of fabric and cut. Commercial markers often include multiple sets of patterns for popular sizes. For example: one set of size Small, two sets of size Medium and one set of size Large. Once the style has been sold and delivered to stores – and if it proves to be quite popular – the pattern of this style will itself become a block, with subsequent generations of patterns developed from it. Standard designing and adjusting tools Hip curve L-Square French curves Pattern notcher Dress forms Slopers - Bodice, skirt, trousers, etc. Retail patterns Home sewing patterns are generally printed on tissue paper and sold in packets containing sewing instructions and suggestions for fabric and trim. They are also available over the Internet as downloadable files. Home sewers can print the patterns at home or take the electronic file to a business that does copying and printing. 
Many pattern companies distribute sewing patterns as electronic files as an alternative to, or in place of, pre-printed packets, which the home sewer can print at home or take to a local copyshop, as they include large format printing versions. Modern patterns are available in a wide range of prices, sizes, styles, and sewing skill levels, to meet the needs of consumers. The majority of modern-day home sewing patterns contain multiple sizes in one pattern. Once a pattern is removed from a package, you can either cut the pattern based on the size you will be making or you can preserve the pattern by tracing it. The pattern is traced onto fabric using one of several methods. In one method, tracing paper with transferable ink on one side is placed between the pattern and the fabric. A tracing wheel is moved over the pattern outlines, transferring the markings onto the fabric with ink that is removable by erasing or washing. In another method, tracing paper is laid directly over a purchased pattern, and the pieces are traced. The pieces are cut, then the tracing paper is pinned and/or basted to the fabric. The fabric can then be cut to match the outlines on the tracing paper. Vintage patterns may come with small holes pre-punched into the pattern paper. These are for creating tailor's tacks, a type of basting where thread is sewn into the fabric in short lengths to serve as a guideline for cutting and assembling fabric pieces. Besides illustrating the finished garment, pattern envelopes typically include charts for sizing, the number of pieces included in a pattern, and suggested fabrics and necessary sewing notions and supplies. Ebenezer Butterick invented the commercially produced graded home sewing pattern in 1863 (based on grading systems used by Victorian tailors), originally selling hand-drawn patterns for men's and boys' clothing. In 1866, Butterick added patterns for women's clothing, which remains the heart of the home sewing pattern market today. Gallery
Technology
Other techniques
null
22782965
https://en.wikipedia.org/wiki/Voay
Voay
Voay is an extinct genus of crocodile from Madagascar that lived during the Late Pleistocene to Holocene, containing only one species, V. robustus. Numerous subfossils have been found, including complete skulls, noted for their distinctive pair of horns on the posterior, as well as vertebrae and osteoderms from such places as Ambolisatra and Antsirabe. The genus is thought to have become extinct relatively recently. It has been suggested to have disappeared in the extinction event that wiped out much of the endemic megafauna on Madagascar, such as the elephant bird and Malagasy hippo, following the arrival of humans to Madagascar around 2000 years ago. Its name comes from the Malagasy word for crocodile. Description One unusual feature of V. robustus that distinguishes it from other crocodilians is the presence of prominent "horns" extending from the posterior portion of the skull. They are actually the posterolaterally extended corners of the squamosal bone. Other related crocodilians such as Aldabrachampsus also had similar bony projections, although in Aldabrachampsus these projections were more like crests than horns. Another diagnostic characteristic is the near-exclusion of the nasals from the external naris. It had a shorter and deeper snout than the extant Crocodylus niloticus, as well as relatively robust limbs. The osteoderms had tall keels and were dorsally symmetrical with curved lateral margins, running the entire length of the postcranial body. V. robustus would have measured around long and weighed about . These estimates suggest that V. robustus was the largest predator to have existed in Madagascar in recent times. Its size, stature, and presumed behavior is similar to the modern Nile crocodile (Crocodylus niloticus). Because V. robustus shared so many similarities with the Nile crocodile there must have been a great deal of interspecies competition for resources between the two crocodile genera if they were to have coexisted with one another. It has recently been proposed that the Nile crocodile only migrated to the island from mainland Africa after V. robustus had become extinct in Madagascar. However, this was subsequently disproved after some Crocodylus specimens from Madagascar were found to be at least 7,500 years old and contemporaneous with Voay. Phylogenetics When V. robustus was first described in 1872, it was originally assigned to the genus Crocodylus. It was later found to morphologically have had more in common with the extant Osteolaemus, or dwarf crocodile, than Crocodylus. Some features it shared with Osteolaemus include a depressed pterygoid surface that forms a choanal "neck" on the palate. Because it was not close enough to be placed in the same genus as the dwarf crocodile, it was assigned to the new genus in 2007. Before this reassignment, the species was considered by some to be synonymous with Crocodylus niloticus. However, this was most likely due to a misinterpretation of remains from the living C. niloticus with V. robustus and the poor description of the original material from which the species was described. In contrast to the morphological similarities with Osteolaemus, a 2021 study using paleogenomics found Voay to be a sister group to Crocodylus, with both genera diverging in the mid-late Oligocene; this indicates that the apparent similarities with Osteolaemus are likely due to convergent evolution. The below cladogram shows the results of the latest study:
Biology and health sciences
Prehistoric crocodiles
Animals
22785535
https://en.wikipedia.org/wiki/Ken%20%28unit%29
Ken (unit)
The is a traditional Japanese unit of length, equal to six Japanese feet (shaku). The exact value has varied over time and location but has generally been a little shorter than . It is now standardized as 1.82 m. Although mostly supplanted by the metric system, this unit is a common measurement in Japanese architecture, where it is used as a proportion for the intervals between the pillars of traditional-style buildings. In this context, it is commonly translated as "bay". The length also appears in other contexts, such as the standard length of the bō staff in Japanese martial arts and the standard dimensions of the tatami mats. As these are used to cover the floors of most Japanese houses, floor surfaces are still commonly measured not in square meters but in "tatami", which are equivalent to half of a square ken. Word Among English loanwords of Japanese origin, both ken and ma are derived from readings of the same character . This kanji graphically combines "door" and "sun". The earlier variant character was written with "moon" rather than "sun", depicting "A door through the crevice of which the moonshine peeps in". The diverse Japanese pronunciations of include on'yomi Sino-Chinese readings (from jian or "room; between; gap; interval") of kan "interval; space; between; among; discord; favorable opportunity" or ken "six feet"; and kun'yomi native Japanese readings of ai "interval; between; medium; crossbred", aida or awai "space; interval; gap; between; among; midway; on the way; distance; time; period; relationship", or ma "space; room; interval; pause; rest (in music); time; a while; leisure; luck; timing; harmony". History The ken is based on the Chinese jian. It uses the same Chinese character as the Korean kan. A building's proportions were (and, to a certain extent, still are) measured in ken, as for example in the case of Enryaku-ji's Konponchū-dō (Main Hall), which measures 11×6 bays (37.60 m × 23.92 m), of which 11×4 are dedicated to the worshipers. Inside buildings, available space was often divided into squares measuring one ken across, and each square was then called a , the term written with the same Chinese character as ken. Traditional buildings usually measure an odd number of bays, for example 3×3 or 5×5. A type of temple gate called the rōmon can have dimensions ranging from 5×2 bays to the more common 3×2 bays down to even 1×1 bay. A Zen-style butsuden, for example, can measure 5×5 ken across externally because its 3×3 ken core (moya) is surrounded by a 1-ken aisle called the hisashi. The value of a ken could change from building to building, but was usually kept constant within the same structure. There can, however, be exceptions. Kasuga Taisha's tiny honden, for example, measures 1×1 in ken, but 1.9×2.6 in meters. In the case of Izumo Taisha's honden, a ken is , well above its standard value. The distance between pillars was standardized very early and started being used as a unit of measurement. Land area in particular was measured using the ken as a basis. The unit was born out of the necessity to measure land surface to calculate taxes. At the time of Toyotomi Hideyoshi (16th century), the ken was about , but around 1650 the Tokugawa shogunate reduced it to , specifically to increase taxes. After the Edo period, the ken started to be called . (Iwanami Kōjien)
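As a minimal worked example of the proportional system described above, assuming the modern standardized value of 1.82 m per ken and one tatami mat taken as half of a square ken (actual mat sizes vary by region), room and building dimensions can be converted to metric units as follows:

```python
# Illustrative conversion only: uses the standardized 1 ken = 1.82 m given above
# and treats one tatami mat as half of a square ken. Regional mat sizes differ.

KEN_M = 1.82                      # metres per ken (modern standardized value)
TATAMI_M2 = 0.5 * KEN_M ** 2      # one tatami = half a square ken, about 1.66 m²

def tatami_to_square_metres(mats: float) -> float:
    """Convert a room size quoted in tatami mats to square metres."""
    return mats * TATAMI_M2

def bays_to_metres(bays: float) -> float:
    """Convert a building dimension quoted in ken (bays) to metres."""
    return bays * KEN_M

# A traditional 6-mat room and a 3 x 3 bay hall:
print(f"6-mat room  : {tatami_to_square_metres(6):.2f} m^2")                 # about 9.94 m²
print(f"3x3-bay hall: {bays_to_metres(3):.2f} m x {bays_to_metres(3):.2f} m")  # 5.46 m x 5.46 m
```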
Physical sciences
East Asian
Basics and measurement
35607283
https://en.wikipedia.org/wiki/Superfluidity
Superfluidity
Superfluidity is the characteristic property of a fluid with zero viscosity which therefore flows without any loss of kinetic energy. When stirred, a superfluid forms vortices that continue to rotate indefinitely. Superfluidity occurs in two isotopes of helium (helium-3 and helium-4) when they are liquefied by cooling to cryogenic temperatures. It is also a property of various other exotic states of matter theorized to exist in astrophysics, high-energy physics, and theories of quantum gravity. The theory of superfluidity was developed by Soviet theoretical physicists Lev Landau and Isaak Khalatnikov. Superfluidity often co-occurs with Bose–Einstein condensation, but neither phenomenon is directly related to the other; not all Bose–Einstein condensates can be regarded as superfluids, and not all superfluids are Bose–Einstein condensates. Superfluids have some potential practical uses, such as dissolving substances in a quantum solvent. Superfluidity of liquid helium Superfluidity was discovered in helium-4 by Pyotr Kapitsa and independently by John F. Allen and Don Misener in 1937. Onnes possibly observed the superfluid phase transition on August 2 1911, the same day that he observed superconductivity in mercury. It has since been described through phenomenology and microscopic theories. In liquid helium-4, the superfluidity occurs at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its integer spin. A helium-3 atom is a fermion particle; it can form bosons only by pairing with another particle like itself, which occurs at much lower temperatures. The discovery of superfluidity in helium-3 was the basis for the award of the 1996 Nobel Prize in Physics. This process is similar to the electron pairing in superconductivity. Cold atomic gases Superfluidity in an ultracold fermionic gas was experimentally proven by Wolfgang Ketterle and his team who observed quantum vortices in lithium-6 at a temperature of 50 nK at MIT in April 2005. Such vortices had previously been observed in an ultracold bosonic gas using rubidium-87 in 2000, and more recently in two-dimensional gases. As early as 1999, Lene Hau created such a condensate using sodium atoms for the purpose of slowing light, and later stopping it completely. Her team subsequently used this system of compressed light to generate the superfluid analogue of shock waves and tornadoes: Superfluids in astrophysics The idea that superfluidity exists inside neutron stars was first proposed by Arkady Migdal. By analogy with electrons inside superconductors forming Cooper pairs because of electron–lattice interaction, it is expected that nucleons in a neutron star at sufficiently high density and low temperature can also form Cooper pairs because of the long-range attractive nuclear force and lead to superfluidity and superconductivity. In high-energy physics and quantum gravity Superfluid vacuum theory (SVT) is an approach in theoretical physics and quantum mechanics where the physical vacuum is viewed as superfluid. The ultimate goal of the approach is to develop scientific models that unify quantum mechanics (describing three of the four known fundamental interactions) with gravity. This makes SVT a candidate for the theory of quantum gravity and an extension of the Standard Model. 
It is hoped that the development of such a theory would unify all fundamental interactions into a single consistent model, describing all known interactions and elementary particles as different manifestations of the same entity, the superfluid vacuum. On the macro scale, a larger but similar phenomenon has been suggested to occur in the murmurations of starlings; the rapidity of change in flight patterns mimics the phase change leading to superfluidity in some liquid states. Light behaves like a superfluid in various applications, such as Poisson's spot. Like the liquid helium described above, light can travel along the surface of an obstacle before continuing along its trajectory; since light is not affected by local gravity, its "level" becomes its own trajectory and velocity. Another example is a beam of light that travels through an aperture and along its back side before diffracting.
Physical sciences
States of matter
Physics
902683
https://en.wikipedia.org/wiki/Dental%20braces
Dental braces
Dental braces (also known as orthodontic braces, or simply braces) are devices used in orthodontics that align and straighten teeth and help position them with regard to a person's bite, while also aiming to improve dental health. They are often used to correct underbites, as well as malocclusions, overbites, open bites, gaps, deep bites, cross bites, crooked teeth, and various other flaws of the teeth and jaw. Braces can be either cosmetic or structural. Dental braces are often used in conjunction with other orthodontic appliances to help widen the palate or jaws and to otherwise assist in shaping the teeth and jaws. Process The application of braces moves the teeth as a result of force and pressure on the teeth. Traditionally, four basic elements are used: brackets, bonding material, arch wire, and ligature elastic (also called an "O-ring"). The teeth move when the arch wire puts pressure on the brackets and teeth. Sometimes springs or rubber bands are used to put more force in a specific direction. Braces apply constant pressure which, over time, moves teeth into the desired positions. The process loosens the tooth after which new bone grows to support the tooth in its new position. This is called bone remodelling. Bone remodelling is a biomechanical process responsible for making bones stronger in response to sustained load-bearing activity and weaker in the absence of carrying a load. Bones are made of cells called osteoclasts and osteoblasts. Two different kinds of bone resorption are possible: direct resorption, which starts from the lining cells of the alveolar bone, and indirect or retrograde resorption, which occurs when the periodontal ligament has been subjected to an excessive amount and duration of compressive stress. Another important factor associated with tooth movement is bone deposition. Bone deposition occurs in the distracted periodontal ligament. Without bone deposition, the tooth will loosen, and voids will occur distal to the direction of tooth movement. Types Traditional metal wired braces (also known as "train track braces") are stainless-steel and are sometimes used in combination with titanium. Traditional metal braces are the most common type of braces. These braces have a metal bracket with elastic ties (also known as rubber bands) holding the wire onto the metal brackets. The second-most common type of braces is self-ligating braces, which have a built-in system to secure the archwire to the brackets and do not require elastic ties. Instead, the wire goes through the bracket. Often with this type of braces, treatment time is reduced, there is less pain on the teeth, and fewer adjustments are required than with traditional braces. Gold-plated stainless steel braces are often employed for patients allergic to nickel (a basic and important component of stainless steel), but may also be chosen for aesthetic reasons. Lingual braces are a cosmetic alternative in which custom-made braces are bonded to the back of the teeth making them externally invisible. Titanium braces resemble stainless-steel braces but are lighter and just as strong. People with allergies to nickel in steel often choose titanium braces, but they are more expensive than stainless steel braces. Customized orthodontic treatment systems combine high technology including 3-D imaging, treatment planning software and a robot to custom bend the wire. Customized systems such as this offer faster treatment times and more efficient results. 
Progressive, clear removable aligners may be used to gradually move teeth into their final positions. Aligners are generally not used for complex orthodontic cases, such as when extractions, jaw surgery, or palate expansion are necessary. Fitting procedure Orthodontic services may be provided by any licensed dentist trained in orthodontics. In North America, most orthodontic treatment is done by orthodontists, who are dentists specializing in the diagnosis and treatment of malocclusions—malalignments of the teeth, jaws, or both. A dentist must complete 2–3 years of additional post-doctoral training to earn a specialty certificate in orthodontics. There are many general practitioners who also provide orthodontic services. The first step is to determine whether braces are suitable for the patient. The doctor consults with the patient and inspects the teeth visually. If braces are appropriate, a records appointment is set up where X-rays, moulds, and impressions are made. These records are analyzed to determine the problems and the proper course of action. The use of digital models is rapidly increasing in the orthodontic industry. Digital treatment starts with the creation of a three-dimensional digital model of the patient's arches. This model is produced by laser-scanning plaster models created using dental impressions. Computer-automated treatment simulation can automatically separate the gums and teeth from one another and can handle malocclusions well; this software enables clinicians to ensure, in a virtual setting, that the selected treatment will produce the optimal outcome, with minimal user input. Typical treatment times vary from six months to two and a half years depending on the complexity and types of problems. Orthognathic surgery may be required in extreme cases. About two weeks before the braces are applied, orthodontic spacers may be required to spread apart back teeth in order to create enough space for the bands. Teeth to be braced will have an adhesive applied to help the cement bond to the surface of the tooth. In most cases, the teeth will be banded and then brackets will be added. A bracket will be applied with dental cement, and then cured with light until hardened. This process usually takes a few seconds per tooth. If required, orthodontic spacers may be inserted between the molars to make room for molar bands to be placed at a later date. Molar bands are required to ensure brackets will stick. Bands are also utilized when dental fillings or other dental work makes securing a bracket to a tooth infeasible. Orthodontic tubes (stainless steel tubes that allow wires to pass through them), also known as molar tubes, are directly bonded to molar teeth either by a chemical curing or a light curing adhesive. Usually, molar tubes are welded directly to bands, metal rings that fit onto the molar teeth. Directly bonded molar tubes are associated with a higher failure rate when compared to molar bands cemented with glass ionomer cement. Failure of orthodontic brackets, bonded tubes or bands will increase the overall treatment time for the patient. There is evidence suggesting that there is less enamel decalcification associated with molar bands cemented with glass ionomer cement compared with orthodontic tubes directly cemented to molars using a light cured adhesive. Further evidence is needed to draw a more robust conclusion, as the available data are limited. An archwire will be threaded between the brackets and affixed with elastic or metal ligatures.
Ligatures are available in a wide variety of colours, and the patient can choose which colour they like. Arch wires are bent, shaped, and tightened frequently to achieve the desired results. Modern orthodontics makes frequent use of nickel-titanium archwires and temperature-sensitive materials. When cold, the archwire is limp and flexible, easily threaded between brackets of any configuration. Once heated to body temperature, the arch wire will stiffen and seek to retain its shape, creating constant light force on the teeth. Brackets with hooks can be placed, or hooks can be created and affixed to the arch wire to affix rubber bands. The placement and configuration of the rubber bands will depend on the course of treatment and the individual patient. Rubber bands are made in different diameters, colours, sizes, and strengths. They are also typically available in two versions: Coloured or clear/opaque. The fitting process can vary between different types of braces, though there are similarities such as the initial steps of moulding the teeth before application. For example, with clear braces, impressions of a patient's teeth are evaluated to create a series of trays, which fit to the patient's mouth almost like a protective mouthpiece. With some forms of braces, the brackets are placed in a special form that is customized to the patient's mouth, drastically reducing the application time. In many cases, there is insufficient space in the mouth for all the teeth to fit properly. There are two main procedures to make room in these cases. One is extraction: teeth are removed to create more space. The second is expansion, in which the palate or arch is made larger by using a palatal expander. Expanders can be used with both children and adults. Since the bones of adults are already fused, expanding the palate is not possible without surgery to separate them. An expander can be used on an adult without surgery but would be used to expand the dental arch, and not the palate. Sometimes children and teenage patients, and occasionally adults, are required to wear a headgear appliance as part of the primary treatment phase to keep certain teeth from moving (for more detail on headgear and facemask appliances see Orthodontic headgear). When braces put pressure on one's teeth, the periodontal membrane stretches on one side and is compressed on the other. This movement needs to be done slowly or otherwise, the patient risks losing their teeth. This is why braces are worn as long as they are and adjustments are only made every so often. Braces are typically adjusted every three to six weeks. This helps shift the teeth into the correct position. When they get adjusted, the orthodontist removes the coloured or metal ligatures keeping the arch wire in place. The arch wire is then removed and may be replaced or modified. When the archwire has been placed back into the mouth, the patient may choose a colour for the new elastic ligatures, which are then affixed to the metal brackets. The adjusting process may cause some discomfort to the patient, which is normal. Post-treatment Patients may need post-orthodontic surgery, such as a fiberotomy or alternatively a gum lift, to prepare their teeth for retainer use and improve the gumline contours after the braces come off. After braces treatment, patients can use a transparent plate to keep the teeth in alignment for a certain period of time. After treatment, patients usually use transparent plates for 6 months. 
In patients with long and difficult treatment, a fixative wire is attached to the back of the teeth to prevent the teeth from returning to their original state. Retainers In order to prevent the teeth from moving back to their original position, retainers are worn once the treatment is complete. Retainers help in maintaining and stabilizing the position of teeth long enough to permit the reorganization of the supporting structures after the active phase of orthodontic therapy. If the patient does not wear the retainer appropriately and/or for the right amount of time, the teeth may move towards their previous position. For regular braces, Hawley retainers are used. They are made of metal hooks that surround the teeth and are enclosed by an acrylic plate shaped to fit the patient's palate. For Clear Removable braces, an Essix retainer is used. This is similar to the original aligner; it is a clear plastic tray that is firmly fitted to the teeth and stays in place without a plate fitted to the palate. There is also a bonded retainer where a wire is permanently bonded to the lingual side of the teeth, usually the lower teeth only. Headgear Headgear needs to be worn between 12 and 22 hours each day to be effective in correcting the overbite, typically for 12 to 18 months depending on the severity of the overbite, how much it is worn and what growth stage the patient is in. Typically the prescribed daily wear time will be between 14 and 16 hours a day and is frequently used as a post-primary treatment phase to maintain the position of the jaw and arch. Headgear can be used during the night while the patient sleeps. Orthodontic headgear usually consists of three major components: Facebow: the facebow (or J-Hooks) is fitted with a metal arch onto headgear tubes attached to the rear upper and lower molars. This facebow then extends out of the mouth and around the patient's face. J-Hooks are different in that they hook into the patient's mouth and attach directly to the brace (see photo for an example of J-Hooks). Head cap: the head cap typically consists of one or a number of straps fitting around the patient's head. This is attached with elastic bands or springs to the facebow. Additional straps and attachments are used to ensure comfort and safety (see photo). Attachment: typically consisting of rubber bands, elastics, or springs—joins the facebow or J-Hooks and the head cap together, providing the force to move the upper teeth, jaw backwards. The headgear application is one of the most useful appliances available to the orthodontist when looking to correct a Class II malocclusion. See more details in the section Orthodontic headgear. Pre-finisher The pre-finisher is moulded to the patient's teeth by use of extreme pressure on the appliance by the person's jaw. The product is then worn a certain amount of time with the user applying force to the appliance in their mouth for 10 to 15 seconds at a time. The goal of the process is to increase the exercise time in applying the force to the appliance. If a person's teeth are not ready for a proper retainer the orthodontist may prescribe the use of a preformed finishing appliance such as the pre-finisher. This appliance fixes gaps between the teeth, small spaces between the upper and lower jaw, and other minor problems. Complications and risks A group of dental researchers, Fatma Boke, Cagri Gazioglu, Selvi Akkaya, and Murat Akkaya, conducted a study titled "Relationship between orthodontic treatment and gingival health." 
The results indicated that some orthodontist treatments result in gingivitis, also known as gum disease. The researchers concluded that functional appliances used to harness natural forces (such as improving the alignment of bites) do not usually have major effects on the gum after treatment. However, fixed appliances such as braces, which most people get, can result in visible plaque, visible inflammation, and gum recession in a majority of the patients. The formation of plaques around the teeth of patients with braces is almost inevitable regardless of plaque control and can result in mild gingivitis. But if someone with braces does not clean their teeth carefully, plaques will form, leading to more severe gingivitis and gum recession. Experiencing some pain following fitting and activation of fixed orthodontic braces is very common and several methods have been suggested to tackle this. Pain associated with orthodontic treatment increases in proportion to the amount of force that is applied to the teeth. When a force is applied to a tooth via a brace, there is a reduction in the blood supply to the fibres that attach the tooth to the surrounding bone. This reduction in blood supply results in inflammation and the release of several chemical factors, which stimulate the pain response. Orthodontic pain can be managed using pharmacological interventions, which involve the use of analgesics applied locally or systemically. These analgesics are divided into four main categories, including opioids, non-steroidal anti-inflammatory drugs (NSAIDs), paracetamol and local anesthesia. The first three of these analgesics are commonly taken systemically to reduce orthodontic pain. A Cochrane Review in 2017 evaluated the pharmacological interventions for pain relief during orthodontic treatment. The study concluded that there was moderate-quality evidence that analgesics reduce the pain associated with orthodontic treatment. However, due to a lack of evidence, it was unclear whether systemic NSAIDs were more effective than paracetamol, and whether topical NSAIDs were more effective than local anaesthesia in the reduction of pain associated with orthodontic treatment. More high-quality research is required to investigate these particular comparisons. The dental displacement obtained with the orthodontic appliance determines in most cases some degree of root resorption. Only in a few cases is this side effect large enough to be considered real clinical damage to the tooth. In rare cases, the teeth may fall out or have to be extracted due to root resorption. History Ancient According to scholars and historians, braces date back to ancient times. Around 400–300 BC, Hippocrates and Aristotle contemplated ways to straighten teeth and fix various dental conditions. Archaeologists have discovered numerous mummified ancient individuals with what appear to be metal bands wrapped around their teeth. Catgut, a type of cord made from the natural fibres of an animal's intestines, performed a similar role to today's orthodontic wire in closing gaps in the teeth and mouth. The Etruscans buried their dead with dental appliances in place to maintain space and prevent the collapse of the teeth during the afterlife. A Roman tomb was found with a number of teeth bound with gold wire documented as a ligature wire, a small elastic wire that is used to affix the arch wire to the bracket. Even Cleopatra wore a pair. 
Roman philosopher and physician Aulus Cornelius Celsus first recorded the treatment of teeth by finger pressure. Unfortunately, due to a lack of evidence, poor preservation of bodies, and primitive technology, little research was carried out on dental braces until around the 17th century, although dentistry was making great advancements as a profession by then. 18th century Orthodontics truly began developing in the 18th and 19th centuries. In 1728, French dentist Pierre Fauchard, who is often credited with inventing modern orthodontics, published a book entitled "The Surgeon Dentist" on methods of straightening teeth. Fauchard, in his practice, used a device called a "Bandeau", a horseshoe-shaped piece of iron that helped expand the palate. In 1754, another French dentist, Louis Bourdet, dentist to the King of France, followed Fauchard's book with The Dentist's Art, which also dedicated a chapter to tooth alignment and application. He perfected the "Bandeau" and was the first dentist on record to recommend extraction of the premolar teeth to alleviate crowding and improve jaw growth. 19th century Although teeth and palate straightening and/or pulling were used to improve the alignment of remaining teeth and had been practised since early times, orthodontics, as a science of its own, did not really exist until the mid-19th century. Several important dentists helped to advance dental braces with specific instruments and tools that allowed braces to be improved. In 1819, Christophe François Delabarre introduced the wire crib, which marked the birth of contemporary orthodontics, and gum elastics were first employed by Maynard in 1843. Tucker was the first to cut rubber bands from rubber tubing in 1850. Dentist, writer, artist, and sculptor Norman William Kingsley in 1858 wrote the first article on orthodontics and in 1880, his book, Treatise on Oral Deformities, was published. A dentist named John Nutting Farrar is credited with writing a two-volume work entitled A Treatise on the Irregularities of the Teeth and Their Corrections and was the first to suggest the use of mild force at timed intervals to move teeth. 20th century In the early 20th century, Edward Angle devised the first simple classification system for malocclusions, such as Class I, Class II, and so on. His classification system is still used today as a way for dentists to describe how crooked teeth are, which way teeth are pointing, and how teeth fit together. Angle contributed greatly to the design of orthodontic and dental appliances, making many simplifications. He founded the first school and college of orthodontics, organized the American Society of Orthodontia in 1901, which became the American Association of Orthodontists (AAO) in the 1930s, and founded the first orthodontic journal in 1907. Other innovations in orthodontics in the late 19th and early 20th centuries included the first textbook on orthodontics for children, published by J.J. Guilford in 1889, and the use of rubber elastics, pioneered by Calvin S. Case, along with Henry Albert Baker. Today, space age wires (also known as dental arch wires) are used to tighten braces. In 1959, the Naval Ordnance Laboratory created an alloy of nickel and titanium called Nitinol. NASA further studied the material's physical properties. In 1979, Dr. George Andreasen developed a new method of fixing braces with the use of the Nitinol wires based on their superelasticity. Andreasen used the wire on some patients and later found out that he could use it for the entire treatment.
Andreasen then began using nitinol wires for all of his treatments; as a result, patients needed fewer office visits, the cost of dental treatment fell, and patients reported less discomfort.
Biology and health sciences
Dental treatments
Health
902982
https://en.wikipedia.org/wiki/Slug%20%28unit%29
Slug (unit)
The slug is a derived unit of mass in a weight-based system of measures, most notably within the British Imperial measurement system and the United States customary measures system. Systems of measure either define mass and derive a force unit or define a base force and derive a mass unit (cf. poundal, a derived unit of force in a mass-based system). A slug is defined as the mass that is accelerated at 1 ft/s² when a net force of one pound (lbf) is exerted on it. One slug is a mass equal to based on standard gravity, the international foot, and the avoirdupois pound. In other words, at the Earth's surface (in standard gravity), an object with a mass of 1 slug weighs approximately . History The slug is part of a subset of units known as the gravitational FPS system, one of several such specialized systems of mechanical units developed in the late 19th and the early 20th century. Geepound was another name for this unit in early literature. The name "slug" was coined before 1900 by British physicist Arthur Mason Worthington, but it did not see any significant use until decades later. It derives from the sense "solid block of metal" (cf. "slug" fake coin or "slug" projectile), not from the slug mollusc. The slug is listed in the Regulations under the Weights and Measures (National Standards) Act, 1960. This regulation defines the units of weights and measures, both regular and metric, in Australia. Related units The inch version of the slug (equal to 1 lbf⋅s²/in, or 12 slugs) has no official name, but is commonly referred to as a blob, slinch (a portmanteau of the words slug and inch), slugette, or snail. It is equivalent to based on standard gravity. Similar (but long-obsolete) metric units included the glug (980.665 g) in a gravitational system related to the centimetre–gram–second system, and the mug, hyl, par, or TME (9.80665 kg) in a gravitational system related to the metre–kilogram–second system.
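As a quick worked check of the definition above (1 slug = 1 lbf·s²/ft), the SI value of a slug can be computed from the exact standard definitions of the avoirdupois pound, the international foot, and standard gravity; the constants below are those standard values, not figures taken from this article:

```python
# One slug is the mass that one pound-force accelerates at 1 ft/s², i.e. 1 lbf·s²/ft.

POUND_KG = 0.45359237            # avoirdupois pound in kg (exact)
G0 = 9.80665                     # standard gravity in m/s² (exact)
FOOT_M = 0.3048                  # international foot in m (exact)

LBF_N = POUND_KG * G0            # one pound-force in newtons, about 4.448222 N
SLUG_KG = LBF_N / FOOT_M         # 1 slug = 1 lbf·s²/ft, about 14.5939 kg

weight_N = SLUG_KG * G0          # weight of one slug at standard gravity
weight_lbf = weight_N / LBF_N    # the same weight expressed in pounds-force

print(f"1 slug is about {SLUG_KG:.4f} kg")                       # 14.5939 kg
print(f"1 slug weighs about {weight_lbf:.3f} lbf at standard gravity")  # 32.174 lbf
```

The result, roughly 14.59 kg and a standard-gravity weight of about 32.17 lbf, matches the gaps left by the stripped conversion templates in the text above.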
Physical sciences
Mass and weight
Basics and measurement
904386
https://en.wikipedia.org/wiki/Addax
Addax
The addax (Addax nasomaculatus), also known as the white antelope and the screwhorn antelope, is an antelope native to the Sahara Desert. The only member of the genus Addax, it was first described scientifically by Henri de Blainville in 1816. As suggested by its alternative name, the pale antelope has long, twisted horns – typically in females and in males. Males stand from at the shoulder, with females at . They are sexually dimorphic, as the females are smaller than the males. The colour of the coat depends on the season – in the winter, it is greyish-brown with white hindquarters and legs, and long, brown hair on the head, neck, and shoulders; in the summer, the coat turns almost completely white or sandy blonde. The addax mainly eats grasses and leaves of any available shrubs, leguminous herbs and bushes. They are well-adapted to exist in their desert habitat, as they can live without water for long periods of time. Addax form herds of five to 20 members, consisting of both males and females. The herd is usually led by one dominant male. Due to its slow movements, the addax is an easy target for its predators: humans, lions, leopards, cheetahs and African wild dogs. Breeding season is at its peak during winter and early spring. The natural habitat of the addax are arid regions, semideserts and sandy and stony deserts. The addax is a critically endangered species of antelope, as classified by the IUCN. Although extremely rare in its native habitat due to unregulated hunting, it is quite common in captivity. The addax was once abundant in North Africa; however it is currently only native to Chad, Mauritania, and Niger. It is extirpated from Algeria, Egypt, Libya, Sudan, and Western Sahara, but has been reintroduced into Morocco and Tunisia. Taxonomy and naming The scientific name of the addax is Addax nasomaculatus. This antelope was first described by French zoologist and anatomist Henri Blainville in 1816. It is placed in the monotypic genus Addax and the family Bovidae. Henri Blainville observed syntypes in Bullock's Pantherion and the Museum of the Royal College of Surgeons. English naturalist Richard Lydekker stated their type locality to be probably Senegambia, though he did not have anything to support the claim. Finally, from a discussion in 1898, it became more probable that British hunters or collectors obtained the addax from the part of the Sahara in Tunisia. The generic name Addax is thought to be obtained from an Arabic word meaning a wild animal with crooked horns. It is also thought to have originated from a Latin word. The name was first used in 1693. The specific name nasomaculatus comes from the Latin words nasus (or the prefix naso-) meaning nose, and maculatus meaning spotted, referring to the spots and facial markings of the species. Bedouins use another name for the addax, the Arabic bakr (or bagr) al wahsh, which literally means "the cow of the wild". That name can be used to refer to other ungulates as well. The other common names of addax are "white antelope" and "screwhorn antelope". Genetics The addax has 29 pairs of chromosomes. All chromosomes are acrocentric except for the first pair of autosomes, which are submetacentric. The X chromosome is the largest of the acrocentric chromosomes, and the Y chromosome is medium-sized. The short and long arms of the pair of submetacentric autosomes correspond respectively to the 27th and 1st chromosomes in cattle and goats. 
In a study, the banding patterns of chromosomes in addax were found to be similar to those in four other species of the subfamily Hippotraginae. History and fossil record In ancient times, the addax occurred from northern Africa through Arabia and the Levant. Pictures in a tomb, dating back to 2500 BCE, show at least the partial domestication of the addax by the ancient Egyptians. These pictures show addax and some other antelopes tied with ropes to stakes. The number of addax captured by a person were considered an indicator of his high social and economic position in the society. The pygarg ("white-buttocked") beast mentioned in Deuteronomy 14:5 is believed by Henry Baker Tristram to have been an addax. But today, excess poaching has resulted in the extinction of this species in Egypt since the 1960s. Addax fossils have been found in four sites of Egypt – a 7000 BCE fossil from the Great Sand Sea, a 5000–6000 BCE fossil from Djara, a 4000–7000 BCE fossil from Abu Ballas Stufenmland and a 5000 BCE fossil from Gilf Kebir. Apart from these, fossils have also been excavated from Mittleres Wadi Howar (6300 BCE fossil), and Pleistocene fossils from Grotte Neandertaliens, Jebel Irhoud and Parc d'Hydra. Physical description The addax is a spiral-horned antelope. Male addaxes stand from at the shoulder, with females at . They are sexually dimorphic, as the females are smaller than the males. The head and body length in both sexes is , with a long tail. The weight of males varies from , and that of females from . The coloring of the addax's coat varies with the season. In the winter, it is greyish-brown with white hindquarters and legs, and long, brown hair on the head, neck, and shoulders. In the summer, the coat turns almost completely white or sandy blonde. Their head is marked with brown or black patches that form an 'X' over their noses. They have scraggly beards and prominent red nostrils. Long, black hairs stick out between their curved and spiralling horns, ending in a short mane on the neck. The horns, which are found on both males and females, have two to three twists and are typically in females and in males, although the maximum recorded length is . The lower and middle portions of the horns are marked with a series of 30 to 35 ring-shaped ridges. The tail is short and slender, ending in a puff of black hair. The hooves are broad with flat soles and strong dewclaws to help them walk on soft sand. All four feet possess scent glands. The life span of the addax is up to 19 years in the wild, which can be extended to 25 years under captivity. The addax closely resembles the scimitar oryx, but can be distinguished by its horns and facial markings. While the addax is spiral-horned, the scimitar oryx has decurved long horns. The addax has a brown hair tuft extending from the base of its horns to between its eyes. A white patch, continuing from the brown hair, extends until the middle of the cheek. On the other hand, the scimitar oryx has a white forehead with only a notable brown marking: a brown lateral stripe across its eyes. It differs from other antelopes by having large, square teeth like cattle and lacking the typical facial glands. Addaxes in Souss-Massa National Park, Morocco Parasites The addax is most prone to parasites in moist climatic conditions. Addaxes have always been infected with nematodes in the Trichostrongyloidea and Strongyloidea superfamilies. 
In an exotic ranch in Texas, an addax was found host to the nematodes Haemonchus contortus and Longistrongylus curvispiculum in its abomasum, of which the former was dominant. Behavior and ecology Addax herds contain both males and females, and have from five to 20 members. They will generally stay in one place and only wander widely in search of food. The herd is usually formed around one dominant male. In captivity, males show signs of territoriality and mate guarding while captive females establish dominance hierarchies, with oldest females holding highest rank Herds are more likely to be found along the northern edge of the tropical rain system during the summer and move north as winter falls. They are able to track rainfall and will head for these areas where vegetation is more plentiful. Males are territorial and guard females, while the females establish their own dominance hierarchies. Due to its slow movements, the addax is an easy target for predators such as humans, lions, leopards, cheetahs and African wild dogs. Caracals, servals and hyenas attack calves. The addax is normally not aggressive, though individuals may charge if they are disturbed. Adaptations The addax is amply suited to live in the deep desert under extreme conditions. It can survive without free water almost indefinitely, because it gets moisture from its food and dew that condenses on plants. Scientists think the addax has a special lining in its stomach that stores water in pouches to use in times of dehydration. It also produces highly concentrated urine to conserve water. The pale colour of the coat reflects radiant heat, and the length and density of the coat helps in thermoregulation. In the day the addax huddles together in shaded areas, and on cool nights rests in sand hollows. These practices help in dissipation of body heat and saving water by cooling the body through evaporation. In a study, eight addaxes on a diet of grass hay (Chloris gayana) were studied to determine the retention time of food from the digestive tract. It was found that food retention time was long, taken as an adaptation to a diet including a high proportion of slow fermenting grasses; while the long fluid retention time could be interpreted to be due to water-saving mechanisms with low water turnover and a roomy rumen. Diet The addax lives in desert terrain where it eats grasses and leaves of what shrubs, leguminous herbs and bushes are available. Primarily a grazer, its staple foods include Aristida, Panicum, and Stipagrostis, and it will only consume browse, such as leaves of Acacia trees in the absence of these grasses. It also eats perennials which turn green and sprout at the slightest bit of humidity or rain. The addax eats only certain parts of the plant and tends to crop the Aristida grasses neatly to the same height. By contrast, when feeding on Panicum grass, the drier outer leaves are left alone while it eats the tender inner shoots and seeds. These seeds are important part of the addax's diet, being its main source of protein. Reproduction Females are sexually mature at 2 to 3 years of age and males at about 2 years. Breeding occurs throughout the year, but it peaks during winter and early spring. In the northern Sahara, breeding peaks at the end of winter and the beginning of spring; in the southern Sahara, breeding peaks from September to October and from January to mid-April. Each estrus bout lasts for one or two days. 
In a study, the blood serum of female addaxes was analyzed through immunoassay to know about their luteal phase. Estrous cycle duration was of about 33 days. During pregnancy, ultrasonography showed the uterine horns as coiled. The maximum diameters of the ovarian follicle and the corpus luteum were and . Each female underwent an anovulatory period lasting 39 to 131 days, during which there was no ovulation. Anovulation was rare in winter, which suggested the effect of seasons on the estrous cycle. Gestation period lasts 257–270 days (about nine months). Females may lie or stand during the delivery, during which one calf is born. A postpartum estrus occurs after two or three days. The calf weighs at birth and is weaned at 23–29 weeks old. Habitat and distribution The addax inhabits arid regions, semideserts and sandy and stony deserts. They can live in extremely arid areas, with less than 100 mm annual rainfall. It also inhabits deserts with tussock grasses (Stipagrostis species) and succulent thorn scrub Cornulaca. Formerly, the addax was widespread in the Sahelo-Saharan region of Africa, west of the Nile Valley and all countries sharing the Sahara Desert; but today the only known self-sustaining population is present in the Termit Massif Reserve (Niger). However, there are reports of sightings from the eastern Air Mountains (Niger) and Bodélé (Chad). Rare nomads may be seen in northern Niger, southern Algeria and Libya; and the addax is rumoured to be present along the Mali/Mauritania border, though there have been no confirmed sightings. The addax was once abundant in North Africa, native to Chad, Mauritania and Niger. It is extinct in Algeria, Egypt, Libya, Sudan and the Western Sahara. It has been reintroduced into Morocco and Tunisia. Threats and conservation Declines in the population of the addax have been ongoing since the mid-1800s. More recently, addaxes were found from Algeria to Sudan, but due mainly to overhunting, they have become much more restricted and rare. Addaxes are easy to hunt due to their slow movements. Roadkill, firearms for easy hunting and nomadic settlements near waterholes (their dry-season feeding places) have also decreased their numbers. Moreover, their meat and leather are highly prized. Other threats include chronic droughts in the deserts, habitat destruction due to more human settlements and agriculture. Fewer than 500 individuals are thought to exist in the wild today, most of the animals being found between the Termit area of Niger, the Bodélé region of western Chad, and the Aoukar in Mauritania. Today there are over 600 addaxes in Europe, Yotvata Hai-Bar Nature Reserve (Israel), Sabratha (Libya), Giza Zoo (Egypt), North America, Japan and Australia under captive breeding programmes. There are thousands more in private collections and ranches in the United States and the Middle East. Addaxes are legally protected in Morocco, Tunisia, and Algeria; hunting of all gazelles is forbidden in Libya and Egypt. Although enormous reserves, such as the Hoggar Mountains and Tasilli in Algeria, the Ténéré in Niger, the Ouadi Rimé-Ouadi Achim Faunal Reserve in Chad, and the newly established Wadi Howar National Park in Sudan, cover areas where the addax previously occurred, some do not keep addaxes at the present time because they lack the resources. The addax has been reintroduced into Bou-Hedma National Park (Tunisia) and Souss-Massa National Park (Morocco). 
Reintroductions in the wild are ongoing in Jebil National Park (Tunisia) and Grand Erg Oriental (the Sahara), and another is planned for Morocco.
Biology and health sciences
Bovidae
Animals
904954
https://en.wikipedia.org/wiki/Hives
Hives
Hives, also known as urticaria, is a kind of skin rash with red and/or flesh-colored, raised, itchy bumps. Hives may burn or sting. The patches of rash may appear on different body parts, with variable duration from minutes to days, and do not leave any long-lasting skin change. Fewer than 5% of cases last for more than six weeks (a condition known as chronic urticaria). The condition frequently recurs. Hives frequently occur following an infection or as a result of an allergic reaction such as to medication, insect bites, or food. Psychological stress, cold temperature, or vibration may also be a trigger. In half of cases the cause remains unknown. Risk factors include having conditions such as hay fever or asthma. Diagnosis is typically based on appearance. Patch testing may be useful to determine the allergy. Prevention is by avoiding whatever it is that causes the condition. Treatment is typically with antihistamines, with the second generation antihistamines such as fexofenadine, loratadine and cetirizine being preferred due to less risk of sedation and cognitive impairment. In refractory (obstinate) cases, corticosteroids or leukotriene inhibitors may also be used. Keeping the environmental temperature cool is also useful. For cases that last more than six weeks, long-term antihistamine therapy is indicated. Immunosuppressants such as omalizumab or cyclosporin may also be used. About 20% of people are affected at some point in their lives. Cases of short duration occur equally in males and females, while cases of long duration are more common in females. Cases of short duration are more common among children, while cases of long duration are more common among those who are middle aged. Hives have been described since at least the time of Hippocrates. The term urticaria is from the Latin urtica meaning "nettle". Signs and symptoms Hives, or urticaria, is a form of skin rash with red, raised, itchy bumps. They may also burn or sting. Hives can appear anywhere on the surface of the skin. Whether the trigger is allergic or not, a complex release of inflammatory mediators, including histamine from cutaneous mast cells, results in fluid leakage from superficial blood vessels. Hives may be pinpoint in size or several inches in diameter, they can be individual or confluent, coalescing into larger forms. About 20% of people are affected. Cases of short duration occur equally in males and females, lasting a few days and without leaving any long-lasting skin changes. Cases of long duration are more common in females. Cases of short duration are more common among children while cases of long duration are more common among those who are middle aged. Fewer than 5% of cases last for more than six weeks. The condition frequently recurs. In half of cases of hives, the cause remains unknown. Angioedema is a related condition (also from allergic and nonallergic causes), though fluid leakage is from much deeper blood vessels in the subcutaneous or submucosal layers. Individual hives that are painful, last more than 24 hours, or leave a bruise as they heal are more likely to be a more serious condition called urticarial vasculitis. Hives caused by stroking the skin (often linear in appearance) are due to a benign condition called dermatographic urticaria. Cause Hives can also be classified by the purported causative agent. Many different substances in the environment may cause hives, including medications, food and physical agents. 
In perhaps more than 50% of people with chronic hives of unknown cause, it is due to an autoimmune reaction. Risk factors include having conditions such as hay fever or asthma. Medications Drugs that have caused allergic reactions evidenced as hives include codeine, sulphate of morphia, dextroamphetamine, aspirin, ibuprofen, penicillin, clotrimazole, trichazole, sulfonamides, anticonvulsants, cefaclor, piracetam, vaccines, and antidiabetic drugs. The antidiabetic sulphonylurea glimepiride, in particular, has been documented to induce allergic reactions manifesting as hives. Food The most common food allergies in adults are shellfish and nuts. The most common food allergies in children are shellfish, nuts, eggs, wheat, and soy. One study showed Balsam of Peru, which is in many processed foods, to be the most common cause of immediate contact urticaria. Another food allergy that can cause hives is alpha-gal allergy, which may cause sensitivity to milk and red meat. A less common cause is exposure to certain bacteria, such as Streptococcus species or possibly Helicobacter pylori. Infection or environmental agent Hives including chronic spontaneous hives can be a complication and symptom of a parasitic infection, such as blastocystosis and strongyloidiasis among others. The rash that develops from poison ivy, poison oak, and poison sumac contact is commonly mistaken for urticaria. This rash is caused by contact with urushiol and results in a form of contact dermatitis called urushiol-induced contact dermatitis. Urushiol is spread by contact but can be washed off with a strong grease- or oil-dissolving detergent and cool water and rubbing ointments. Dermatographic urticaria Dermatographic urticaria (also known as dermatographism or "skin writing") is marked by the appearance of weals or welts on the skin as a result of scratching or firm stroking of the skin. Seen in 4–5% of the population, it is one of the most common types of urticaria, in which the skin becomes raised and inflamed when stroked, scratched, rubbed, and sometimes even slapped. The skin reaction usually becomes evident soon after the scratching and disappears within 30 minutes. Dermatographism is the most common form of a subset of chronic hives, acknowledged as "physical hives". It stands in contrast to the linear reddening that does not itch seen in healthy people who are scratched. In most cases, the cause is unknown, although it may be preceded by a viral infection, antibiotic therapy, or emotional upset. Dermatographism is diagnosed by applying pressure by stroking or scratching the skin. The hives should develop within a few minutes. Unless the skin is highly sensitive and reacts continually, treatment is not needed. Taking antihistamines can reduce the response in cases that are annoying to the person. Pressure or delayed pressure This type of hives can occur right away, precisely after a pressure stimulus or as a deferred response to sustained pressure being enforced to the skin. In the deferred form, the hives only appear after about six hours from the initial application of pressure to the skin. Under normal circumstances, these hives are not the same as those witnessed with most urticariae. Instead, the protrusion in the affected areas is typically more spread out. The hives may last from eight hours to three days. The source of the pressure on the skin can happen from tight fitted clothing, belts, clothing with tough straps, walking, leaning against an object, standing, sitting on a hard surface, etc. 
The areas of the body most commonly affected are the hands, feet, trunk, abdomen, buttocks, legs and face. Although this appears to be very similar to dermatographism, the cardinal difference is that the swelled skin areas do not become visible quickly and tend to last much longer. This form of the skin disease is, however, rare. Cholinergic or stress Cholinergic urticaria (CU) is one of the physical urticaria which is provoked during sweating events such as exercise, bathing, staying in a heated environment, or emotional stress. The hives produced are typically smaller than classic hives and are generally shorter-lasting. Multiple subtypes have been elucidated, each of which require distinct treatment. Cold-induced The cold type of urticaria is caused by exposure of the skin to extreme cold, damp and windy conditions; it occurs in two forms. The rare form is hereditary and becomes evident as hives all over the body 9 to 18 hours after cold exposure. The common form of cold urticaria demonstrates itself with the rapid onset of hives on the face, neck, or hands after exposure to cold. Cold urticaria is common and lasts for an average of five to six years. The population most affected is young adults, between 18 and 25 years old. Many people with the condition also have dermographism and cholinergic hives. Severe reactions can be seen with exposure to cold water; swimming in cold water is the most common cause of a severe reaction. This can cause a massive discharge of histamine, resulting in low blood pressure, fainting, shock and even loss of life. Cold urticaria is diagnosed by dabbing an ice cube against the skin of the forearm for 1 to 5 minutes. A distinct hive should develop if a person has cold urticaria. This is different from the normal redness that can be seen in people without cold urticaria. People with cold urticaria need to learn to protect themselves from a hasty drop in body temperature. Regular antihistamines are not generally efficacious. One particular antihistamine, cyproheptadine (Periactin), has been found to be useful. The tricyclic antidepressant doxepin has been found to be effective blocking agents of histamine. Finally, a medication named ketotifen, which keeps mast cells from discharging histamine, has also been employed with widespread success. Solar urticaria This form of the disease occurs on areas of the skin exposed to the sun; the condition becomes evident within minutes of exposure. Water-induced This type of urticaria is also termed rare and occurs upon contact with water. The response is not temperature-dependent and the skin appears similar to the cholinergic form of the disease. The appearance of hives is within one to 15 minutes of contact with the water and can last from 10 minutes to two hours. This kind of hives does not seem to be stimulated by histamine discharge like the other physical hives. Most researchers believe this condition is actually skin sensitivity to additives in the water, such as chlorine. Water urticaria is diagnosed by dabbing tap water and distilled water to the skin and observing the gradual response. Aquagenic urticaria is treated with capsaicin (Zostrix) administered to the chafed skin. This is the same treatment used for shingles. Antihistamines are of questionable benefit in this instance since histamine is not the causative factor. Exercise The condition was first distinguished in 1980. 
People with exercise urticaria (EU) experience hives, itchiness, shortness of breath and low blood pressure five to 30 minutes after beginning exercise. These symptoms can progress to shock and even sudden death. Jogging is the most common exercise to cause EU, but it is not induced by a hot shower, fever, or with fretfulness. This differentiates EU from cholinergic urticaria. EU sometimes occurs only when someone exercises within 30 minutes of eating particular foods, such as wheat or shellfish. For these individuals, exercising alone or eating the injuring food without exercising produces no symptoms. EU can be diagnosed by having the person exercise and then observing the symptoms. This method must be used with caution and only with the appropriate resuscitative measures at hand. EU can be differentiated from cholinergic urticaria by the hot water immersion test. In this test, the person is immersed in water at 43 °C (109.4 °F). Someone with EU will not develop hives, while a person with cholinergic urticaria will develop the characteristic small hives, especially on the neck and chest. The immediate symptoms of this type are treated with antihistamines, epinephrine and airway support. Taking antihistamines prior to exercise may be effective. Ketotifen is acknowledged to stabilise mast cells and prevent histamine release, and has been effective in treating this hives disorder. Avoiding exercise or foods that cause the mentioned symptoms is very important. In particular circumstances, tolerance can be brought on by regular exercise, but this must be under medical supervision. Pathophysiology The skin lesions of urticarial disease are caused by an inflammatory reaction in the skin, causing leakage of capillaries in the dermis, and resulting in an edema which persists until the interstitial fluid is absorbed into the surrounding cells. Hives are caused by the release of histamine and other mediators of inflammation (cytokines) from cells in the skin. This process can be the result of an allergic or nonallergic reaction, differing in the eliciting mechanism of histamine release. Allergic hives Histamine and other proinflammatory substances are released from mast cells in the skin and tissues in response to the binding of allergen-bound IgE antibodies to high-affinity cell surface receptors. Basophils and other inflammatory cells are also seen to release histamine and other mediators, and are thought to play an important role, especially in chronic urticarial diseases. Autoimmune hives Over half of all cases of chronic idiopathic hives are the result of an autoimmune trigger. Roughly 50% of people with chronic urticaria spontaneously develop autoantibodies directed at the receptor FcεRI located on skin mast cells. Chronic stimulation of this receptor leads to chronic hives. People with hives often have other autoimmune conditions, such as autoimmune thyroiditis, celiac disease, type 1 diabetes, rheumatoid arthritis, Sjögren's syndrome or systemic lupus erythematosus. Infections Hive-like rashes commonly accompany viral illnesses, such as the common cold. They usually appear three to five days after the cold has started, and may even appear a few days after the cold has resolved. Nonallergic hives Mechanisms other than allergen-antibody interactions are known to cause histamine release from mast cells. Many drugs, for example morphine, can induce direct histamine release not involving any immunoglobulin molecule. 
Also, a diverse group of signaling substances called neuropeptides has been found to be involved in emotionally induced hives. Dominantly inherited cutaneous and neurocutaneous porphyrias (porphyria cutanea tarda, hereditary coproporphyria, variegate porphyria and erythropoietic protoporphyria) have been associated with solar urticaria. The occurrence of drug-induced solar urticaria may be associated with porphyrias. This may be caused by IgG binding, not IgE. Dietary histamine poisoning This is termed scombroid food poisoning. Ingestion of free histamine released by bacterial decay in fish flesh may result in a rapid-onset, allergic-type symptom complex which includes hives. However, the hives produced by scombroid poisoning are reported not to include wheals. Stress and chronic idiopathic hives Chronic idiopathic hives has been anecdotally linked to stress since the 1940s. A large body of evidence demonstrates an association between this condition and both poor emotional well-being and reduced health-related quality of life. A link between stress and this condition has also been shown. Some cases have been thought to be due to stress, including an association between post-traumatic stress and chronic idiopathic hives. In most cases of chronic idiopathic urticaria, no cause is identified. Diagnosis Diagnosis is typically based on the appearance. The cause of chronic hives can rarely be determined. Patch testing may be useful to determine the allergy. In some cases regular extensive allergy testing over a long period of time is requested in hopes of getting new insight. No evidence shows regular allergy testing results in identification of a problem or relief for people with chronic hives. Regular allergy testing for people with chronic hives is not recommended. Acute versus chronic Acute urticaria is defined as the presence of evanescent wheals which completely resolve within six weeks. Acute urticaria becomes evident a few minutes after the person has been exposed to an allergen. The outbreak may last several weeks, but usually the hives are gone in six weeks. Typically, the hives are a reaction to food, but in about half the cases, the trigger is unknown. Common foods may be the cause, as well as bee or wasp stings, or skin contact with certain fragrances. Acute viral infection is another common cause of acute urticaria (viral exanthem). Less common causes of hives include friction, pressure, temperature extremes, exercise, and sunlight. Chronic urticaria is defined as the presence of hives which persist for greater than six weeks. Some of the more severe chronic cases have lasted more than 20 years. A survey indicated chronic urticaria lasted a year or more in more than 50% of those affected and 20 years or more in 20% of them. Provocative skin challenge testing may be done in those with chronic urticaria, in which the skin is exposed to pressure (dermatographism), cold temperatures, warm temperatures, or light in an attempt to provoke symptoms and aid in the diagnosis. The history and physical examination guide the diagnosis of chronic urticaria, with extensive lab testing not recommended. Acute and chronic hives are indistinguishable on visual inspection alone. Related conditions Angioedema Angioedema is similar to hives, but in angioedema, the swelling occurs in a lower layer of the dermis than in hives, as well as in the subcutis. This swelling can occur around the mouth, eyes, in the throat, in the abdomen, or in other locations. 
Hives and angioedema sometimes occur together in response to an allergen, and is a concern in severe cases, as angioedema of the throat can be fatal. Vibratory angioedema This very rare form of angioedema develops in response to contact with vibration. In vibratory angioedema, symptoms develop within two to five minutes after contact with a vibrating object and abate after about an hour. People with this disorder do not experience dermographism or pressure urticaria. Vibratory angioedema is diagnosed by holding a vibrating device such as a laboratory vortex machine against the forearm for four minutes. Speedy swelling of the whole forearm extending into the upper arm is also noted later. The principal treatment is avoidance of vibratory stimulants. Antihistamines have also been proven helpful. Management The mainstay of therapy for both acute and chronic hives is education, avoiding triggers and using antihistamines. Chronic hives can be difficult to treat and lead to significant disability. Unlike the acute form, 50–80% of people with chronic hives have no identifiable triggers. But 50% of people with chronic hives will experience remission within 1 year. Overall, treatment is geared towards symptomatic management. Individuals with chronic hives may need other medications in addition to antihistamines to control symptoms. People who experience hives with angioedema require emergency treatment as this is a life-threatening condition. Treatment guidelines for the management of chronic hives have been published. According to the 2014 American practice parameters, treatment involves a stepwise approach. Step 1 consists of second generation, H1 receptor blocking antihistamines. Systemic glucocorticoids can also be used for episodes of severe disease but should not be used for long term due to their long list of side effects. Step 2 consists of increasing the dose of the current antihistamine, adding other antihistamines, or adding a leukotriene receptor antagonist such as montelukast. Step 3 consists of adding or replacing the current treatment with hydroxyzine or doxepin. If the individual doesn't respond to steps 1–3 then they are considered to have refractory symptoms. At this point, anti-inflammatory medications (dapsone, sulfasalazine), immunosuppressants (cyclosporin, sirolimus) or other medications like omalizumab can be used. These options are explained in more detail below. First generation antihistamines, such as diphenhydramine or hydroxyzine, are not recommended as a first line therapy as they block both brain and peripheral H1 receptors, causing sedation. Second-generation antihistamines, such as loratadine, cetirizine, fexofenadine or desloratadine, selectively antagonize peripheral H1 receptors, and are less sedating, less anticholinergic, and generally preferred over the first-generation antihistamines. Fexofenadine, a new-generation antihistamine that blocks histamine H1 receptors, may be less sedating than some second-generation antihistamines. People who do not respond to the maximum dose of H1 antihistamines may benefit from increasing the dose further, then to switching to another non-sedating antihistamine, then to adding a leukotriene antagonist, then to using an older antihistamine, then to using systemic steroids and finally to using ciclosporin or omalizumab. Steroids are often associated with rebound hives once discontinued. H2-receptor antagonists are sometimes used in addition to H1-antagonists to treat urticaria, but there is limited evidence for their efficacy. 
Systemic steroids Oral glucocorticoids are effective in controlling symptoms of chronic hives. However, they have an extensive list of adverse effects, such as adrenal suppression, weight gain, osteoporosis, hyperglycemia, etc. Therefore, their use should be limited to a couple of weeks. In addition, one study found that systemic glucocorticoids combined with antihistamines did not hasten the time to symptom control compared with antihistamines alone. Leukotriene-receptor antagonists Leukotrienes are released from mast cells along with histamine. The medications montelukast and zafirlukast block leukotriene receptors and can be used as add-on treatment or in isolation for people with CU. It is important to note that these medications may be more beneficial for people with NSAID-induced CU. Other Other options for refractory symptoms of chronic hives include anti-inflammatory medications, omalizumab, and immunosuppressants. Potential anti-inflammatory agents include dapsone, sulfasalazine, and hydroxychloroquine. Dapsone is a sulfone antimicrobial agent and is thought to suppress prostaglandin and leukotriene activity. It is helpful in therapy-refractory cases and is contraindicated in people with G6PD deficiency. Sulfasalazine, a 5-ASA derivative, is thought to alter adenosine release and inhibit IgE-mediated mast cell degranulation. Sulfasalazine is a good option for people with anemia who cannot take dapsone. Hydroxychloroquine is an antimalarial agent that suppresses T lymphocytes. It has a low cost; however, it takes longer than dapsone or sulfasalazine to work. Omalizumab was approved by the FDA in 2014 for people 12 years old and above with chronic hives. It is a monoclonal antibody directed against IgE. Significant improvement in pruritus and quality of life was observed in a phase III, multicenter, randomized controlled trial. Immunosuppressants used for CU include cyclosporine, tacrolimus, sirolimus, and mycophenolate. Calcineurin inhibitors, such as cyclosporine and tacrolimus, inhibit cell responsiveness to mast cell products and inhibit T cell activity. They are preferred by some experts to treat severe symptoms. Sirolimus and mycophenolate have less evidence for their use in the treatment of chronic hives but reports have shown them to be efficacious. Immunosuppressants are generally reserved as the last line of therapy for severe cases due to their potential for serious adverse effects. Prognosis In those with chronic urticaria, defined as either continuous or intermittent symptoms lasting longer than 6 weeks, 35% of people are symptom-free 1 year after treatment, while 29% have a reduction in their symptoms. Those with a longer disease duration typically have a worse prognosis, with greater symptom severity. Chronic urticaria is often accompanied by intense pruritus and other symptoms associated with a reduced quality of life and a high burden of co-morbid psychiatric conditions such as anxiety and depression. Epidemiology Chronic urticaria is usually seen in those older than 40 years and is more common in women. The prevalence of chronic urticaria is 0.23% in the United States. Research Afamelanotide is being studied as a hives treatment. Opioid antagonists such as naltrexone have tentative evidence to support their use. History The term urticaria was first used by the Scottish physician William Cullen in 1769. 
It originates from the Latin word urtica, meaning stinging hair or nettle, as the classical presentation follows contact with the perennial flowering plant Urtica dioica. The history of urticaria dates back to 1000–2000 BC with its reference as a wind-type concealed rash in The Yellow Emperor's Inner Classic (Huangdi Neijing). Hippocrates in the 4th century BC first described urticaria as "knidosis" after the Greek word knido for nettle. The discovery of mast cells by Paul Ehrlich in 1879 brought urticaria and similar conditions under a comprehensive idea of allergic conditions.
Biology and health sciences
Symptoms and signs
Health
906156
https://en.wikipedia.org/wiki/Hydrogen%20economy
Hydrogen economy
The hydrogen economy is an umbrella term for the roles hydrogen can play alongside low-carbon electricity to reduce emissions of greenhouse gases. The aim is to reduce emissions where cheaper and more energy-efficient clean solutions are not available. In this context, hydrogen economy encompasses the production of hydrogen and the use of hydrogen in ways that contribute to phasing out fossil fuels and limiting climate change. Hydrogen can be produced by several means. Most hydrogen produced today is gray hydrogen, made from natural gas through steam methane reforming (SMR). This process accounted for 1.8% of global greenhouse gas emissions in 2021. Low-carbon hydrogen, which is made using SMR with carbon capture and storage (blue hydrogen), or through electrolysis of water using renewable power (green hydrogen), accounted for less than 1% of production. Virtually all of the 100 million tonnes of hydrogen produced each year is used in oil refining (43% in 2021) and industry (57%), principally in the manufacture of ammonia for fertilizers, and methanol. To limit global warming, it is generally envisaged that the future hydrogen economy replaces gray hydrogen with low-carbon hydrogen. As of 2024 it is unclear when enough low-carbon hydrogen could be produced to phase out all the gray hydrogen. The future end-uses are likely in heavy industry (e.g. high-temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, as alternative to coal-derived coke for steelmaking), long-haul transport (e.g. shipping, and to a lesser extent hydrogen-powered aircraft and heavy goods vehicles), and long-term energy storage. Other applications, such as light-duty vehicles and heating in buildings, are no longer part of the future hydrogen economy, primarily for economic and environmental reasons. Hydrogen is challenging to store, to transport in pipelines, and to use. It presents safety concerns since it is highly explosive, and it is inefficient compared to direct use of electricity. Since relatively small amounts of low-carbon hydrogen are available, climate benefits can be maximized by using it in harder-to-decarbonize applications. There are no real alternatives to hydrogen for several chemical processes in which it is currently used, such as ammonia production for fertilizer. The cost of low- and zero-carbon hydrogen is likely to influence the degree to which it will be used in chemical feedstocks, long-haul aviation and shipping, and long-term energy storage. Production costs of low- and zero-carbon hydrogen are evolving. Future costs may be influenced by carbon taxes, the geography and geopolitics of energy, energy prices, technology choices, and their raw material requirements. It is likely that green hydrogen will see the greatest reductions in production cost over time. The U.S. Department of Energy's Hydrogen Hotshot Initiative seeks to reduce the cost of green hydrogen to $1 a kilogram during the 2030s. History and objectives Origins The concept of a society that uses hydrogen as the primary means of energy storage was theorized by geneticist J. B. S. Haldane in 1923. Anticipating the exhaustion of Britain's coal reserves for power generation, Haldane proposed a network of wind turbines to produce hydrogen and oxygen for long-term energy storage through electrolysis, to help address renewable power's variable output. The term "hydrogen economy" itself was coined by John Bockris during a talk he gave in 1970 at General Motors (GM) Technical Center. 
Bockris viewed it as an economy in which hydrogen, underpinned by nuclear and solar power, would help address growing concern about fossil fuel depletion and environmental pollution, by serving as energy carrier for end-uses in which electrification was not suitable. A hydrogen economy was proposed by the University of Michigan to solve some of the negative effects of using hydrocarbon fuels where the carbon is released to the atmosphere (as carbon dioxide, carbon monoxide, unburnt hydrocarbons, etc.). Modern interest in the hydrogen economy can generally be traced to a 1970 technical report by Lawrence W. Jones of the University of Michigan, in which he echoed Bockris' dual rationale of addressing energy security and environmental challenges. Unlike Haldane and Bockris, Jones only focused on nuclear power as the energy source for electrolysis, and principally on the use of hydrogen in transport, where he regarded aviation and heavy goods transport as the top priorities. Later evolution A spike in attention for the hydrogen economy concept during the 2000s was repeatedly described as hype by some critics and proponents of alternative technologies, and investors lost money in the bubble. Interest in the energy carrier resurged in the 2010s, notably with the formation of the World Hydrogen Council in 2017. Manufacturers such as Toyota and Hyundai released hydrogen fuel cell cars commercially, and they and industry groups in China planned to increase numbers of the cars into the hundreds of thousands over the next decade. The global scope for hydrogen's role in cars is shrinking relative to earlier expectations. By the end of 2022, 70,200 hydrogen vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. Early 2020s takes on the hydrogen economy share earlier perspectives' emphasis on the complementarity of electricity and hydrogen, and the use of electrolysis as the mainstay of hydrogen production. They focus on the need to limit global warming to 1.5 °C and prioritize the production, transportation and use of green hydrogen for heavy industry (e.g. high-temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, as alternative to coal-derived coke for steelmaking), long-haul transport (e.g. shipping, aviation and to a lesser extent heavy goods vehicles), and long-term energy storage. Current hydrogen market Hydrogen production globally was valued at over US$155 billion in 2022 and is expected to grow over 9% annually through 2030. In 2021, 94 million tonnes (Mt) of molecular hydrogen (H2) was produced. Of this total, approximately one sixth was produced as a by-product of petrochemical industry processes. Most hydrogen comes from dedicated production facilities, over 99% of which is from fossil fuels, mainly via steam reforming of natural gas (70%) and coal gasification (30%, almost all of it in China). Less than 1% of dedicated hydrogen production is low carbon: steam fossil fuel reforming with carbon capture and storage, green hydrogen produced using electrolysis, and hydrogen produced from biomass. CO2 emissions from 2021 production, at 915 MtCO2, amounted to 2.5% of energy-related CO2 emissions and 1.8% of global greenhouse gas emissions. Virtually all hydrogen produced for the current market is used in oil refining (40 Mt in 2021) and industry (54 MtH2). 
In oil refining, hydrogen is used, in a process known as hydrocracking, to convert heavy petroleum sources into lighter fractions suitable for use as fuels. Industrial uses mainly comprise ammonia production to make fertilizers (34 Mt in 2021), methanol production (15 Mt) and the manufacture of direct reduced iron (5 Mt). Production Green methanol Green methanol is a liquid fuel that is produced from combining carbon dioxide and hydrogen (H2) under pressure and heat with catalysts. It is a way to recycle captured carbon dioxide. Methanol can store hydrogen economically at standard outdoor temperatures and pressures, compared to liquid hydrogen and ammonia, which require substantial energy to stay cold in their liquid state. In 2023, the Laura Maersk was the first container ship to run on methanol fuel. Ethanol plants in the Midwest are a good place for pure carbon capture to combine with hydrogen to make green methanol, with abundant wind and nuclear energy in Iowa, Minnesota, and Illinois. Mixing methanol with ethanol could make methanol a safer fuel to use because methanol doesn't have a visible flame in the daylight and doesn't emit smoke, and ethanol has a visible light yellow flame. If green hydrogen is produced at 70% efficiency and methanol is then produced from that hydrogen at 70% efficiency, the overall energy conversion efficiency is about 49% (0.70 × 0.70 = 0.49). Uses Hydrogen can be deployed as a fuel in two distinct ways: in fuel cells which produce electricity, and via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapor. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxide emissions. Industry In the context of limiting global warming, low-carbon hydrogen (particularly green hydrogen) is likely to play an important role in decarbonizing industry. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonization of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon catalyst replacing coal-derived coke. The imperative to use low-carbon hydrogen to reduce greenhouse gas emissions has the potential to reshape the geography of industrial activities, as locations with appropriate hydrogen production potential in different regions will interact in new ways with logistics infrastructure, raw material availability, human and technological capital. Transport Much of the interest in the hydrogen economy concept is focused on hydrogen vehicles, particularly planes. Hydrogen vehicles produce significantly less local air pollution than conventional vehicles. By 2050, the energy requirement for transportation might be between 20% and 30% fulfilled by hydrogen and synthetic fuels. Hydrogen used to decarbonize transportation is likely to find its largest applications in shipping, aviation and to a lesser extent heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and fuel cell technology. Hydrogen has been used in fuel cell buses for many years. It is also used as a fuel for spacecraft propulsion. 
In the International Energy Agency's 2022 Net Zero Emissions Scenario (NZE), hydrogen is forecast to account for 2% of rail energy demand in 2050, while 90% of rail travel is expected to be electrified by then (up from 45% today). Hydrogen's role in rail would likely be focused on lines that prove difficult or costly to electrify. The NZE foresees hydrogen meeting approximately 30% of heavy truck energy demand in 2050, mainly for long-distance heavy freight (with battery electric power accounting for around 60%). Although hydrogen can be used in adapted internal combustion engines, fuel cells, being electrochemical, have an efficiency advantage over heat engines. Fuel cells are more expensive to produce than common internal combustion engines and require higher-purity hydrogen fuel than internal combustion engines. In the light road vehicle segment, including passenger cars, by the end of 2022, 70,200 fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. With the rapid rise of electric vehicles and associated battery technology and infrastructure, hydrogen's role in cars is minuscule. Energy system balancing and storage Green hydrogen, from electrolysis of water, has the potential to address the variability of renewable energy output. Producing green hydrogen can reduce the need for renewable power curtailment during periods of high renewables output, and the hydrogen can be stored long-term to provide for power generation during periods of low output. Ammonia An alternative to gaseous hydrogen as an energy carrier is to bond it with nitrogen from the air to produce ammonia, which can be easily liquefied, transported, and used (directly or indirectly) as a clean and renewable fuel. Among the disadvantages of ammonia as an energy carrier are its high toxicity, the energy efficiency of producing it from N2 and H2, and the poisoning of PEM fuel cells by traces of non-decomposed NH3 after NH3-to-H2 conversion. Buildings Numerous industry groups (gas networks, gas boiler manufacturers) across the natural gas supply chain are promoting hydrogen combustion boilers for space and water heating, and hydrogen appliances for cooking, to reduce energy-related CO2 emissions from residential and commercial buildings. The proposition is that current end-users of piped natural gas can await the conversion of existing natural gas grids to supply hydrogen, then swap heating and cooking appliances, and that there is no need for consumers to do anything now. A review of 32 studies on the question of hydrogen for heating buildings, independent of commercial interests, found that the economics and climate benefits of hydrogen for heating and cooking generally compare very poorly with the deployment of district heating networks, electrification of heating (principally through heat pumps) and cooking, the use of solar thermal, waste heat and the installation of energy efficiency measures to reduce energy demand for heat. Due to inefficiencies in hydrogen production, using blue hydrogen to replace natural gas for heating could require three times as much methane, while using green hydrogen would need two to three times as much electricity as heat pumps. Hybrid heat pumps, which combine the use of an electric heat pump with a hydrogen boiler, may play a role in residential heating in areas where upgrading networks to meet peak electrical demand would otherwise be costly. 
The widespread use of hydrogen for heating buildings would entail higher energy system costs, higher heating costs and higher environmental impacts than the alternatives, although a niche role may be appropriate in specific contexts and geographies. If deployed, using hydrogen in buildings would drive up the cost of hydrogen for harder-to-decarbonize applications in industry and transport. Bio-SNG Although production of syngas from hydrogen and carbon dioxide from bio-energy with carbon capture and storage (BECCS) via the Sabatier reaction is technically possible, it is limited by the amount of sustainable bioenergy available; therefore, any bio-SNG made may be reserved for the production of aviation biofuel. Safety Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form. In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids. Hydrogen dissolves in many metals and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement, leading to cracks and explosions. Hydrogen is flammable when mixed even in small amounts with ordinary air. Ignition can occur at a volumetric ratio of hydrogen to air as low as 4%. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns. Hydrogen infrastructure Storage Power plants Xcel Energy plans to build two combined cycle power plants in the Midwest that can mix 30% hydrogen with natural gas. Intermountain Power Plant is being retrofitted into a natural gas/hydrogen power plant that can run on 30% hydrogen as well, and is scheduled to run on pure hydrogen by 2045. Costs More widespread use of hydrogen in economies entails the need for investment and costs in its production, storage, distribution and use. Estimates of hydrogen's cost are therefore complex and need to make assumptions about the cost of energy inputs (typically gas and electricity), production plant and method (e.g. green or blue hydrogen), technologies used (e.g. alkaline or proton exchange membrane electrolysers), storage and distribution methods, and how different cost elements might change over time. These factors are incorporated into calculations of the levelized costs of hydrogen (LCOH). Estimates of the levelized costs of gray, blue, and green hydrogen are typically expressed in terms of US$ per kg of H2 (where data are provided in other currencies or units, the average exchange rate to US dollars in the given year is used, and 1 kg of H2 is assumed to have a calorific value of 33.3 kWh). The range of cost estimates for commercially available hydrogen production methods is broad. As of 2022, gray hydrogen is cheapest to produce without a tax on its CO2 emissions, followed by blue and green hydrogen. Blue hydrogen production costs are not anticipated to fall substantially by 2050; they can be expected to fluctuate with natural gas prices and could face carbon taxes for uncaptured emissions. The cost of electrolysers fell by 60% from 2010 to 2022, before rising slightly due to an increasing cost of capital. Their cost is projected to fall significantly to 2030 and 2050, driving down the cost of green hydrogen alongside the falling cost of renewable power generation. 
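As a rough worked illustration of how a levelized cost per kilogram translates into a cost per unit of energy (a sketch using only the 33.3 kWh/kg calorific value stated above; the $1 per kilogram figure is the DOE target mentioned earlier, not a market price):

\[
\text{cost per kWh} = \frac{\text{cost per kg of H}_2}{33.3\ \text{kWh/kg}}, \qquad \frac{\$1\ \text{per kg}}{33.3\ \text{kWh/kg}} \approx \$0.03\ \text{per kWh}
\]

By the same conversion, hydrogen at, say, $5 per kilogram corresponds to roughly $0.15 per kWh, which is why per-kilogram cost targets are often compared directly against the prices of the fuels and electricity that hydrogen would displace.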
It is cheapest to produce green hydrogen with surplus renewable power that would otherwise be curtailed, which favors electrolyzers capable of responding to low and variable power levels. A 2022 Goldman Sachs analysis anticipates that globally green hydrogen will achieve cost parity with grey hydrogen by 2030, earlier if a global carbon tax is placed on gray hydrogen. In terms of cost per unit of energy, blue and gray hydrogen will always cost more than the fossil fuels used in its production, while green hydrogen will always cost more than the renewable electricity used to make it. Subsidies for clean hydrogen production are much higher in the US and EU than in India. Examples and pilot programs The distribution of hydrogen for the purpose of transportation is being tested around the world, particularly in the US (California, Massachusetts), Canada, Japan, the EU (Portugal, Norway, Denmark, Germany), and Iceland. An indicator of the presence of large natural gas infrastructures already in place in countries and in use by citizens is the number of natural gas vehicles present in the country. The countries with the largest amount of natural gas vehicles are (in order of magnitude): Iran, China, Pakistan, Argentina, India, Brazil, Italy, Colombia, Thailand, Uzbekistan, Bolivia, Armenia, Bangladesh, Egypt, Peru, Ukraine, the United States. Natural gas vehicles can also be converted to run on hydrogen. Also, in a few private homes, fuel cell micro-CHP plants can be found, which can operate on hydrogen, or other fuels as natural gas or LPG. Australia Western Australia's Department of Planning and Infrastructure operated three Daimler Chrysler Citaro fuel cell buses as part of its Sustainable Transport Energy for Perth Fuel Cells Bus Trial in Perth. The buses were operated by Path Transit on regular Transperth public bus routes. The trial began in September 2004 and concluded in September 2007. The buses' fuel cells used a proton exchange membrane system and were supplied with raw hydrogen from a BP refinery in Kwinana, south of Perth. The hydrogen was a byproduct of the refinery's industrial process. The buses were refueled at a station in the northern Perth suburb of Malaga. In October 2021, Queensland Premier Annastacia Palaszczuk and Andrew Forrest announced that Queensland will be home to the world's largest hydrogen plant. In Australia, the Australian Renewable Energy Agency (ARENA) has invested $55 million in 28 hydrogen projects, from early stage research and development to early stage trials and deployments. The agency's stated goal is to produce hydrogen by electrolysis for $2 per kilogram, announced by Minister for Energy and Emissions Angus Taylor in a 2021 Low Emissions Technology Statement. European Union Countries in the EU which have a relatively large natural gas pipeline system already in place include Belgium, Germany, France, and the Netherlands. In 2020, The EU launched its European Clean Hydrogen Alliance (ECHA). France Green hydrogen has become more common in France. A €150 million Green Hydrogen Plan was established in 2019, and it calls for building the infrastructure necessary to create, store, and distribute hydrogen as well as using the fuel to power local transportation systems like buses and trains. Corridor H2, a similar initiative, will create a network of hydrogen distribution facilities in Occitania along the route between the Mediterranean and the North Sea. The Corridor H2 project will get a €40 million loan from the EIB. 
Germany German car manufacturer BMW has been working with hydrogen for years. The German government has announced plans to hold tenders for 5.5 GW of new hydrogen-ready gas-fired power plants and 2 GW of "comprehensive H2-ready modernisations" of existing gas power stations at the end of 2024 or beginning of 2025. Iceland Iceland has committed to becoming the world's first hydrogen economy by the year 2050. Iceland is in a unique position. Presently, it imports all the petroleum products necessary to power its automobiles and fishing fleet. Iceland has large geothermal resources, so much so that the local price of electricity actually is lower than the price of the hydrocarbons that could be used to produce that electricity. Iceland already converts its surplus electricity into exportable goods and hydrocarbon replacements. In 2002, it produced 2,000 tons of hydrogen gas by electrolysis, primarily for the production of ammonia (NH3) for fertilizer. Ammonia is produced, transported, and used throughout the world, and 90% of the cost of ammonia is the cost of the energy to produce it. Neither industry directly replaces hydrocarbons. Reykjavík, Iceland, had a small pilot fleet of city buses running on compressed hydrogen, and research on powering the nation's fishing fleet with hydrogen is under way (for example, by companies such as Icelandic New Energy). For more practical purposes, Iceland might process imported oil with hydrogen to extend it, rather than to replace it altogether. The Reykjavík buses are part of a larger program, HyFLEET:CUTE, operating hydrogen-fueled buses in eight European cities. HyFLEET:CUTE buses were also operated in Beijing, China and Perth, Australia (see above). A pilot project demonstrating a hydrogen economy is operational on the Norwegian island of Utsira. The installation combines wind power and hydrogen power. In periods when there is surplus wind energy, the excess power is used for generating hydrogen by electrolysis. The hydrogen is stored, and is available for power generation in periods when there is little wind. India India is expected to adopt hydrogen and H-CNG for several reasons, among them the fact that a national rollout of natural gas networks is already taking place and natural gas is already a major vehicle fuel. In addition, India suffers from extreme air pollution in urban areas. According to some estimates, nearly 80% of India's hydrogen is projected to be green, driven by cost declines and new production technologies. Currently, however, hydrogen energy is just at the Research, Development and Demonstration (RD&D) stage. As a result, the number of hydrogen stations may still be low, although many more are expected to be introduced soon. Poland Poland is planning to open its first hydrogen refueling stations. The Ministry of Climate and Environment (MKiŚ) will soon launch competitions for 2–3 hydrogen refueling stations, according to Deputy Minister Krzysztof Bolesta. Saudi Arabia Saudi Arabia, as a part of the NEOM project, is looking to produce roughly 1.2 million tonnes of green ammonia a year, beginning production in 2025. In Cairo, Egypt, Saudi real estate investors are funding a skyscraper project powered by hydrogen. Turkey The Turkish Ministry of Energy and Natural Resources and the United Nations Industrial Development Organization created the International Centre for Hydrogen Energy Technologies (UNIDO-ICHET) in Istanbul in 2004 and it ran until 2012. In 2023 the ministry published a Hydrogen Technologies Strategy and Roadmap. 
United Kingdom The UK started a fuel cell pilot program in January 2004. The program ran two fuel cell buses on route 25 in London until December 2005, then switched to route RV1 until January 2007. The Hydrogen Expedition is currently working to create a hydrogen fuel cell-powered ship and use it to circumnavigate the globe, as a way to demonstrate the capability of hydrogen fuel cells. In August 2021 the UK Government claimed it was the first to have a Hydrogen Strategy and produced a strategy document. In August 2021, Chris Jackson quit as chair of the UK Hydrogen and Fuel Cell Association, a leading hydrogen industry association, claiming that UK and Norwegian oil companies had intentionally inflated their cost projections for blue hydrogen in order to maximize future technology support payments by the UK government. United States Several automobile companies, such as GM and Toyota, have developed vehicles using hydrogen. However, as of February 2020, infrastructure for hydrogen was underdeveloped except in some parts of California. The United States has its own hydrogen policy. A joint venture between NREL and Xcel Energy is combining wind power and hydrogen power in the same way in Colorado. Newfoundland and Labrador Hydro is converting the current wind-diesel power system on the remote island of Ramea into a wind-hydrogen hybrid power system facility. Five pump station hubs are being delivered to serve heavy-duty H2 trucks in Texas. Hydrogen City, being built by Green Hydrogen International (GHI), is planned to open in 2026. In 2006, Florida's hydrogen infrastructure project was commissioned. It first opened in Orlando for public bus transportation, where Ford Motor Company announced a fleet of hydrogen-fueled Ford E-450 buses. A liquefied hydrogen mobile fueling system was constructed at Titusville, and an FPL pilot clean hydrogen facility operated in Okeechobee County. A similar pilot project on Stuart Island uses solar power, instead of wind power, to generate electricity. When excess electricity is available after the batteries are fully charged, hydrogen is generated by electrolysis and stored for later production of electricity by fuel cell. The US also has a large natural gas pipeline system already in place. Vietnam The Việt Nam Energy Association has expressed support for green hydrogen. Australian clean energy company Pure Hydrogen Corporation Limited announced on July 22 that it had signed an MoU relating to public transportation in Vietnam.
Technology
Fuel
null
906475
https://en.wikipedia.org/wiki/Athlete%27s%20foot
Athlete's foot
Athlete's foot, known medically as tinea pedis, is a common skin infection of the feet caused by a fungus. Signs and symptoms often include itching, scaling, cracking and redness. In rare cases the skin may blister. Athlete's foot fungus may infect any part of the foot, but most often grows between the toes. The next most common area is the bottom of the foot. The same fungus may also affect the nails or the hands. It is a member of the group of diseases known as tinea. Athlete's foot is caused by a number of different funguses, including species of Trichophyton, Epidermophyton, and Microsporum. The condition is typically acquired by coming into contact with infected skin, or fungus in the environment. Common places where the funguses can survive are around swimming pools and in locker rooms. They may also be spread from other animals. Usually diagnosis is made based on signs and symptoms; however, it can be confirmed either by culture or seeing hyphae using a microscope. Athlete's foot is not limited to just athletes: it can be caused by going barefoot in public showers, letting toenails grow too long, wearing shoes that are too tight, or not changing socks daily. It can be treated with topical antifungal medications such as clotrimazole or, for persistent infections, using oral antifungal medications such as terbinafine. Topical creams are typically recommended to be used for four weeks. Keeping infected feet dry and wearing sandals also assists with treatment. Athlete's foot was first medically described in 1908. Globally, athlete's foot affects about 15% of the population. Males are more often affected than females. It occurs most frequently in older children or younger adults. Historically it is believed to have been a rare condition that became more frequent in the 20th century due to the greater use of shoes, health clubs, war, and travel. Signs and symptoms Athlete's foot is divided into four categories or presentations: chronic interdigital, plantar (chronic scaly; aka "moccasin foot"), acute ulcerative, and vesiculobullous. "Interdigital" means between the toes. "Plantar" here refers to the sole of the foot. The ulcerative condition includes macerated lesions with scaly borders. Maceration is the softening and breaking down of skin due to extensive exposure to moisture. A vesiculobullous disease is a type of mucocutaneous disease characterized by vesicles and bullae (blisters). Both vesicles and bullae are fluid-filled lesions, and they are distinguished by size (vesicles being less than 5 mm and bullae being larger than 5 mm, depending upon what definition is used). Athlete's foot occurs most often between the toes (interdigital), with the space between the fourth and fifth digits (the little toe and the fore toe) most commonly affected. Cases of interdigital athlete's foot caused by Trichophyton rubrum may be symptomless, it may itch, or the skin between the toes may appear red or ulcerative (scaly, flaky, with soft and white if skin has been kept wet), with or without itching. An acute ulcerative variant of interdigital athlete's foot caused by T. mentagrophytes is characterized by pain, maceration of the skin, erosions and fissuring of the skin, crusting, and an odor due to secondary bacterial infection. Plantar athlete's foot (moccasin foot) is also caused by T. 
rubrum which typically causes asymptomatic, slightly erythematous plaques (areas of redness of the skin) to form on the plantar surface (sole) of the foot that are often covered by fine, powdery hyperkeratotic scales. The vesiculobullous type of athlete's foot is less common and is usually caused by T. mentagrophytes and is characterized by a sudden outbreak of itchy blisters and vesicles on an erythematous base, usually appearing on the sole of the foot. This subtype of athlete's foot is often complicated by secondary bacterial infection by Streptococcus pyogenes or Staphylococcus aureus. Complications As the disease progresses, the skin may crack, leading to bacterial skin infection and inflammation of the lymphatic vessels. If allowed to grow for too long, athlete's foot fungus may spread to infect the toenails, feeding on the keratin in them, a condition called onychomycosis. Because athlete's foot may itch, it may also elicit the scratch reflex, causing the host to scratch the infected area before they realize it. Scratching can further damage the skin and worsen the condition by allowing the fungus to more easily spread and thrive. The itching sensation associated with athlete's foot can be so severe that it may cause hosts to scratch vigorously enough to inflict excoriations (open wounds), which are susceptible to bacterial infection. Further scratching may remove scabs, inhibiting the healing process. Scratching infected areas may also spread the fungus to the fingers and under the fingernails. If not washed away soon enough, it can infect the fingers and fingernails, growing in the skin and in the nails (not just underneath). After scratching, it can be spread to wherever the person touches, including other parts of the body and to one's environment. Scratching also causes infected skin scales to fall off into one's environment, leading to further possible spread. When athlete's foot fungus or infested skin particles spread to one's environment (such as to clothes, shoes, bathroom, etc.) whether through scratching, falling, or rubbing off, not only can they infect other people, they can also reinfect (or further infect) the host they came from. For example, infected feet infest one's socks and shoes which further expose the feet to the fungus and its spores when worn again. The ease with which the fungus spreads to other areas of the body (on one's fingers) poses another complication. When the fungus is spread to other parts of the body, it can easily be spread back to the feet after the feet have been treated. And because the condition is called something else in each place it takes hold (e.g., tinea corporis (ringworm) or tinea cruris (jock itch)), persons infected may not be aware it is the same disease. Some individuals may experience an allergic response to the fungus called an id reaction in which blisters or vesicles can appear in areas such as the hands, chest, and arms. Treatment of the underlying infection typically results in the disappearance of the id reaction. Causes Athlete's foot is a form of dermatophytosis (fungal infection of the skin), caused by dermatophytes, funguses (most of which are mold) which inhabit dead layers of skin and digest keratin. Dermatophytes are anthropophilic, meaning these parasitic funguses prefer human hosts. Athlete's foot is most commonly caused by the molds known as Trichophyton rubrum and T. mentagrophytes, but may also be caused by Epidermophyton floccosum. Most cases of athlete's foot in the general population are caused by T. 
rubrum; however, the majority of athlete's foot cases in athletes are caused by T. mentagrophytes. Transmission According to the UK's National Health Service, "Athlete's foot is very contagious and can be spread through direct and indirect contact." The disease may spread to others directly when they touch the infection. People can contract the disease indirectly by coming into contact with contaminated items (clothes, towels, etc.) or surfaces (such as bathroom, shower, or locker room floors). The funguses that cause athlete's foot can easily spread to one's environment. Funguses rub off of fingers and bare feet, but also travel on the dead skin cells that continually fall off the body. Athlete's foot funguses and infested skin particles and flakes may spread to socks, shoes, clothes, to other people, pets (via petting), bed sheets, bathtubs, showers, sinks, counters, towels, rugs, floors, and carpets. When the fungus has spread to pets, it can subsequently spread to the hands and fingers of people who pet them. If a pet frequently gnaws upon itself, it might not be fleas it is reacting to, it may be the insatiable itch of tinea. One way to contract athlete's foot is to get a fungal infection somewhere else on the body first. The funguses causing athlete's foot may spread from other areas of the body to the feet, usually by touching or scratching the affected area, thereby getting the fungus on the fingers, and then touching or scratching the feet. While the fungus remains the same, the name of the condition changes based on where on the body the infection is located. For example, the infection is known as tinea corporis ("ringworm") when the torso or limbs are affected or tinea cruris (jock itch or dhobi itch) when the groin is affected. Clothes (or shoes), body heat, and sweat can keep the skin warm and moist, just the environment the fungus needs to thrive. Risk factors Besides being exposed to any of the modes of transmission presented above, there are additional risk factors that increase one's chance of contracting athlete's foot. Persons who have had athlete's foot before are more likely to become infected than those who have not. Adults are more likely to catch athlete's foot than children. Men have a higher chance of getting athlete's foot than women. People with diabetes or weakened immune systems are more susceptible to the disease. HIV/AIDS hampers the immune system and increases the risk of acquiring athlete's foot. Hyperhidrosis (abnormally increased sweating) increases the risk of infection and makes treatment more difficult. Diagnosis When visiting a doctor, the basic diagnosis procedure applies. This includes checking the patient's medical history and medical record for risk factors, a medical interview during which the doctor asks questions (such as about itching and scratching), and a physical examination. Athlete's foot can usually be diagnosed by visual inspection of the skin and by identifying less obvious symptoms such as itching of the affected area. If the diagnosis is uncertain, direct microscopy of a potassium hydroxide preparation of a skin scraping (known as a KOH test) can confirm the diagnosis of athlete's foot and help rule out other possible causes, such as candidiasis, pitted keratolysis, erythrasma, contact dermatitis, eczema, or psoriasis. Dermatophytes known to cause athlete's foot will demonstrate multiple septate branching hyphae on microscopy. 
A Wood's lamp (black light), although useful in diagnosing fungal infections of the scalp (tinea capitis), is not usually helpful in diagnosing athlete's foot, since the common dermatophytes that cause this disease do not fluoresce under ultraviolet light. Prevention There are several preventive foot hygiene measures that can prevent athlete's foot and reduce recurrence. Some of these include: keeping the feet dry; clipping toenails short; using a separate nail clipper for infected toenails; using socks made from well-ventilated cotton or synthetic moisture-wicking materials (to soak moisture away from the skin to help keep it dry); avoiding tight-fitting footwear; changing socks frequently; and wearing sandals while walking through communal areas such as gym showers and locker rooms. According to the Centers for Disease Control and Prevention, "Nails should be clipped short and kept clean. Nails can house and spread the infection." Recurrence of athlete's foot can be prevented with the use of antifungal powder on the feet. The funguses (molds) that cause athlete's foot require warmth and moisture to survive and grow. There is an increased risk of infection with exposure to warm, moist environments (e.g., occlusive footwear—shoes or boots that enclose the feet) and in shared humid environments such as communal showers, shared pools, and treatment tubs. Chlorine bleach is a disinfectant and common household cleaner that kills mold. Cleaning surfaces with a chlorine bleach solution prevents the disease from spreading from subsequent contact. Cleaning bathtubs, showers, bathroom floors, sinks, and counters with bleach helps prevent the spread of the disease, including reinfection. Keeping socks and shoes clean (using bleach in the wash) is one way to prevent funguses from taking hold and spreading. Avoiding the sharing of boots and shoes is another way to prevent transmission. Athlete's foot can be transmitted by sharing footwear with an infected person. Not sharing also applies to towels, because, though less common, funguses can be passed along on towels, especially damp ones. Treatment Athlete's foot resolves without medication in 30 to 40% of cases. Topical antifungal medication consistently produces much higher rates of cure. Conventional treatment typically involves thoroughly washing the feet daily or twice daily, followed by the application of a topical medication. Because the outer skin layers are damaged and susceptible to reinfection, topical treatment generally continues until all layers of the skin are replaced, about 2 to 6 weeks after symptoms disappear. Keeping feet dry and practicing good hygiene (as described in the above section on prevention) is crucial for killing the fungus and preventing reinfection. Treating the feet is not always enough. Once socks or shoes are infested with funguses, wearing them again can reinfect (or further infect) the feet. Socks can be effectively cleaned in the wash by adding bleach or by washing in hot water. To be effective, treatment includes all infected areas (such as toenails, hands, torso, etc.). Otherwise, the infection may continue to spread, including back to treated areas. For example, leaving fungal infection of the nail untreated may allow it to spread back to the rest of the foot, to become athlete's foot once again. A systematic review found that allylamines such as terbinafine are not more efficacious than azoles for the treatment of athlete's foot. 
Severe or prolonged fungal skin infections may require treatment with oral antifungal medication. Topical treatments There are many topical antifungal drugs useful in the treatment of athlete's foot, including miconazole nitrate, clotrimazole, tolnaftate (a synthetic thiocarbamate), terbinafine hydrochloride, butenafine hydrochloride and undecylenic acid. The fungal infection may be treated with topical antifungal agents, which can take the form of a spray, powder, cream, or gel. Topical application of an antifungal cream such as butenafine once daily for one week or terbinafine once daily for two weeks is effective in most cases of athlete's foot and is more effective than application of miconazole or clotrimazole. Plantar-type athlete's foot is more resistant to topical treatments due to the presence of thickened hyperkeratotic skin on the sole of the foot. Keratolytic and humectant medications such as urea, salicylic acid (Whitfield's ointment), and lactic acid are useful adjunct medications and improve penetration of antifungal agents into the thickened skin. Topical glucocorticoids are sometimes prescribed to alleviate inflammation and itching associated with the infection. A solution of 1% potassium permanganate dissolved in hot water is an alternative to antifungal drugs. Potassium permanganate is a salt and a strong oxidizing agent. Oral treatments For severe or refractory cases of athlete's foot, oral terbinafine is more effective than griseofulvin. Fluconazole or itraconazole may also be taken orally for severe athlete's foot infections. The most commonly reported adverse effect from these medications is gastrointestinal upset. Epidemiology Globally, fungal infections affect about 15% of the population and 20% of adults. Additionally, 70% of the population will experience athlete's foot at some point in life. Athlete's foot is common in individuals who wear unventilated (occlusive) footwear, such as rubber boots or vinyl shoes. Upon exposure to an athlete's foot-causing fungus, the moist conditions generated from poor foot ventilation promote growth of the fungus on the foot or between the toes. Occupationally, studies have shown increased prevalence of athlete's foot among miners, soldiers, and athletes. Likewise, activities such as marathon running have seen increased prevalence of athlete's foot. Countries and regions where going barefoot is more common experience much lower rates of athlete's foot than do populations which habitually wear shoes; as a result, the disease has been called "a penalty of civilization". Studies have demonstrated that men are infected 2 to 4 times more often than women. Cases of athlete's foot were first documented around 1916 during World War I, when infection among soldiers was common. By 1928, it was estimated that nearly ten million Americans had athlete's foot; the alarming prevalence of the disease caused public health concern. In the following year, an epidemiologic study was conducted on incoming freshmen at the University of California; it was found that 53% of incoming freshman men had athlete's foot and by year's end that number had risen to 78%. Prevalence of the disease increased in the 1930s, specifically among individuals of higher socioeconomic status; these individuals had more access to common shared spaces such as pools, colleges, and athletic clubs where transmission of athlete's foot-causing fungus was common. 
Prevalence in the United States was high enough to call for the use of sterilizing footbaths at the 1932 Olympics in Los Angeles. It was at this time that public health officials adopted the idea that athlete's foot was a product of modernity and that dealing with this disease was "a penalty of civilization", as many treatments proved ineffective. Antifungal properties of compounds such as undecylenic acid were studied in the 1940s; products containing zinc undecylenate were shown to be the most effective topical treatment for curing the condition. The use of orally ingested griseofulvin was shown in the 1960s to be effective in acute cases of athlete's foot. Likewise, recorded incidence of athlete's foot decreased among American soldiers in Vietnam who were given griseofulvin as a preventive drug. In the 1990s, research supported the use of itraconazole and the allylamine terbinafine as drugs effective at eliminating athlete's foot and also dermatophyte infections on other parts of the body. As of 2012, research has shown that terbinafine is 2.26 times as likely to cure athlete's foot as treatment with griseofulvin; comparative studies between itraconazole and terbinafine have shown little difference in effectiveness.
Biology and health sciences
Fungal infections
Health
6737633
https://en.wikipedia.org/wiki/Disc%20herniation
Disc herniation
A disc herniation or spinal disc herniation is an injury to the intervertebral disc between two vertebrae, usually caused by excessive strain or trauma to the spine. It may result in back pain, pain or sensation in different parts of the body, and physical disability. The most conclusive diagnostic tool for disc herniation is MRI, and treatments may range from painkillers to surgery. Protection from disc herniation is best provided by core strength and an awareness of body mechanics including good posture. When a tear in the outer, fibrous ring of an intervertebral disc allows the soft, central portion to bulge out beyond the damaged outer rings, the disc is said to be herniated. Disc herniation is frequently associated with age-related degeneration of the outer ring, known as the annulus fibrosus, but is normally triggered by trauma or straining by lifting or twisting. Tears are almost always posterolateral (on the back sides) owing to relative narrowness of the posterior longitudinal ligament relative to the anterior longitudinal ligament. A tear in the disc ring may result in the release of chemicals causing inflammation, which can result in severe pain even in the absence of nerve root compression. Disc herniation is normally a further development of a previously existing disc protrusion, in which the outermost layers of the annulus fibrosus are still intact, but can bulge when the disc is under pressure. In contrast to a herniation, none of the central portion escapes beyond the outer layers. Most minor herniations heal within several weeks. Anti-inflammatory treatments for pain associated with disc herniation, protrusion, bulge, or disc tear are generally effective. Severe herniations may not heal of their own accord and may require surgery. The condition may be referred to as a slipped disc, but this term is not accurate as the spinal discs are firmly attached between the vertebrae and cannot "slip" out of place. Signs and symptoms Typically, symptoms are experienced on one side of the body only. Symptoms of a herniated disc can vary depending on the location of the herniation and the types of soft tissue involved. They can range from little or no pain, if the disc is the only tissue injured, to severe and unrelenting neck pain or low back pain that radiates into regions served by nerve roots which have been irritated or impinged by the herniated material. Often, herniated discs are not diagnosed immediately, as patients present with undefined pains in the thighs, knees, or feet. Symptoms may include sensory changes such as numbness, tingling, paresthesia, and motor changes such as muscular weakness, paralysis, and affection of reflexes. If the herniated disc is in the lumbar region, the patient may also experience sciatica due to irritation of one of the nerve roots of the sciatic nerve. Unlike a pulsating pain or pain that comes and goes, which can be caused by muscle spasm, pain from a herniated disc is usually continuous or at least continuous in a specific position of the body. It is possible to have a herniated disc without pain or noticeable symptoms if the extruded nucleus pulposus material doesn't press on soft tissues or nerves. A small-sample study examining the cervical spine in symptom-free volunteers found focal disc protrusions in 50% of participants, suggesting that a considerable part of the population might have focal herniated discs in their cervical region that do not cause noticeable symptoms. 
A herniated disc in the lumbar spine may cause radiating nerve pain in the lower extremities or groin area and may sometimes be associated with bowel or bladder incontinence. Typically, symptoms are experienced only on one side of the body, but if a herniation is very large and presses on the nerves on both sides within the spinal column or the cauda equina, both sides of the body may be affected, often with serious consequences. Compression of the cauda equina can cause permanent nerve damage or paralysis which can result in loss of bowel and bladder control and sexual dysfunction. This disorder is called cauda equina syndrome. Other complications include chronic pain. Cause When the spine is straight, such as in standing or lying down, internal pressure is equalized on all parts of the discs. While sitting or bending to lift, internal pressure on a disc can move from (lying down) to over (lifting with a rounded back). Herniation of the contents of the disc into the spinal canal often occurs when the anterior side (stomach side) of the disc is compressed while sitting or bending forward, and the contents (nucleus pulposus) get pressed against the tightly stretched and thinned membrane (annulus fibrosus) on the posterior side (back side) of the disc. The combination of membrane-thinning from stretching and increased internal pressure () can result in the rupture of the confining membrane. The jelly-like contents of the disc then move into the spinal canal, pressing against the spinal nerves, which may produce intense and potentially disabling pain and other symptoms. Some authors favour degeneration of the intervertebral disc as the major cause of spinal disc herniation and cite trauma as a minor cause. Disc degeneration occurs both in degenerative disc disease and aging. With degeneration, the disc components – the nucleus pulposus and annulus fibrosus – become exposed to altered loads. Specifically, the nucleus becomes fibrous and stiff and less able to bear load. Excess load is transferred to the annulus, which may then develop fissures as a result. If the fissures reach the periphery of the annulus, the nuclear material can pass through as a disc herniation. Mutations in several genes have been implicated in intervertebral disc degeneration. Probable candidate genes include type I collagen (sp1 site), type IX collagen, vitamin D receptor, aggrecan, asporin, MMP3, interleukin-1, and interleukin-6 polymorphisms. Mutation in genes – such as MMP2 and THBS2 – that encode for proteins and enzymes involved in the regulation of the extracellular matrix has been shown to contribute to lumbar disc herniation. Disc herniations can result from general wear and tear, such as weightlifting training, constant sitting or squatting, driving, or a sedentary lifestyle. Herniations can also result from the lifting of heavy loads. Professional athletes, especially those playing contact sports, such as American football, Rugby, ice hockey, and wrestling, are known to be prone to disc herniations as well as some limited contact sports that require repetitive flexion and compression such as soccer, baseball, basketball, and volleyball. Within athletic contexts, herniation is often the result of sudden blunt impacts against, or abrupt bending or torsional movements of, the lower back. Pathophysiology The majority of disc herniations occur in the lumbar spine (95% at L4–L5 or L5–S1). The second most common site is the cervical region (C5–C6, C6–C7). The thoracic region accounts for only 1–2% of cases. 
Herniations usually occur postero-laterally, at the points where the annulus fibrosus is relatively thin and is not reinforced by the posterior or anterior longitudinal ligament. In the cervical spine, a symptomatic postero-lateral herniation between two vertebrae will impinge on the nerve which exits the spinal canal between those two vertebrae on that side. So, for example, a right postero-lateral herniation of the disc between vertebrae C5 and C6 will impinge on the right C6 spinal nerve. The rest of the spinal cord, however, is oriented differently, so a symptomatic postero-lateral herniation between two vertebrae will impinge on the nerve exiting at the next intervertebral level down. Lumbar disc herniation Lumbar disc herniations occur in the back, most often between the fourth and fifth lumbar vertebral bodies or between the fifth and the sacrum. Here, symptoms can be felt in the lower back, buttocks, thigh, anal/genital region (via the perineal nerve), and may radiate into the foot and/or toe. The sciatic nerve is the most commonly affected nerve, causing symptoms of sciatica. The femoral nerve can also be affected and cause the patient to experience a numb, tingling feeling throughout one or both legs and even feet or a burning feeling in the hips and legs. A herniation in the lumbar region often compresses the nerve root exiting at the level below the disc. Thus, a herniation of the L4–5 disc compresses the L5 nerve root, only if the herniation is posterolateral. Cervical disc herniation Cervical disc herniations occur in the neck, most often between the fifth and sixth (C5–6) and the sixth and seventh (C6–7) cervical vertebral bodies. There is an increased susceptibility amongst older (60+) patients to herniations higher in the neck, especially at C3–4. Symptoms of cervical herniations may be felt in the back of the skull, the neck, shoulder girdle, scapula, arm, and hand. The nerves of the cervical plexus and brachial plexus can be affected. Intradural disc herniation Intradural disc herniation occurs when the disc material crosses the dura mater and enters the thecal sac. It is a rare form of disc herniation with an incidence of 0.2–2.2%. Pre-operative imaging can be helpful for diagnosis, but intra-operative findings are required for confirmation. Inflammation It is increasingly recognized that back pain resulting from disc herniation is not always due solely to nerve compression of the spinal cord or nerve roots, but may also be caused by chemical inflammation. There is evidence that points to a specific inflammatory mediator in back pain: an inflammatory molecule, called tumor necrosis factor alpha (TNF), is released not only by a herniated disc, but also in cases of disc tear (annulus tear) by facet joints, and in spinal stenosis. In addition to causing pain and inflammation, TNF may contribute to disc degeneration. Diagnosis Terminology Terms commonly used to describe the condition include herniated disc, prolapsed disc, ruptured disc, and slipped disc. Other conditions that are closely related include disc protrusion, radiculopathy (pinched nerve), sciatica, disc disease, disc degeneration, degenerative disc disease, and black disc (a totally degenerated spinal disc). The popular term slipped disc is a misnomer, as the intervertebral discs are tightly sandwiched between two vertebrae to which they are attached, and cannot actually "slip", or even get out of place. 
The disc is actually grown together with the adjacent vertebrae and can be squeezed, stretched and twisted, all in small degrees. It can also be torn, ripped, herniated, and degenerated, but it cannot "slip". Some authors consider that the term slipped disc is harmful, as it leads to an incorrect idea of what has occurred and thus of the likely outcome. However, during growth, one vertebral body can slip relative to an adjacent vertebral body, a deformity called spondylolisthesis. Spinal disc herniation is known in Latin as prolapsus disci intervertebralis. Click images to see larger versions Physical examination Diagnosis of spinal disc herniation is made by a practitioner on the basis of a patient's history and symptoms, and by physical examination. During an evaluation, tests may be performed to confirm or rule out other possible causes with similar symptoms – spondylolisthesis, degeneration, tumors, metastases and space-occupying lesions, for instance – as well as to evaluate the efficacy of potential treatment options. Straight leg raise The straight leg raise is often used as a preliminary test for possible disc herniation in the lumbar region. A variation is to lift the leg while the patient is sitting. However, this reduces the sensitivity of the test. A Cochrane review published in 2010 found that individual diagnostic tests including the straight leg raising test, absence of tendon reflexes, or muscle weakness were not very accurate when conducted in isolation. Spinal imaging Projectional radiography (X-ray imaging). Traditional plain X-rays are limited in their ability to image soft tissues such as discs, muscles, and nerves, but they are still used to confirm or exclude other possibilities such as tumors, infections, fractures, etc. In spite of their limitations, X-rays play a relatively inexpensive role in confirming the suspicion of the presence of a herniated disc. If a suspicion is thus strengthened, other methods may be used to provide final confirmation. Computed tomographyscan is the most sensitive imaging modality to examine the bony structures of the spine. CT imaging allows for the evaluation of calcified herniated discs or any pathological process that may result in bone loss or destruction. It is deficient for the visualization of nerve roots, making it unsuitable in the diagnoses of radiculopathy. Magnetic resonance imaging is the gold standard study for confirming a suspected LDH. With a diagnostic accuracy of 97%, it is the most sensitive study to visualize a herniated disc due to its significant ability in soft tissue visualization. MRI also has higher inter-observer reliability than other imaging modalities. It suggests disc herniation when it shows an increased T2-weighted signal at the posterior 10% of the disc. Degenerative disc diseases have shown a correlation with Modic type 1 changes. When evaluating for postoperative lumbar radiculopathies, the recommendation is that the MRI is performed with contrast unless otherwise contraindicated. MRI is more effective than CT in distinguishing inflammatory, malignant, or inflammatory etiologies of LDH. It is indicated relatively early in the course of evaluation (<8 weeks) when the patient presents with relative indications like significant pain, neurological motor deficits, and cauda equina syndrome. Diffusion tensor imaging is a type of MRI sequence used for detecting microstructural changes in the nerve root. 
It may be beneficial in understanding the changes that occur after herniated lumbar disc compresses a nerve root, and might help in differentiating the patients that need surgical intervention. In patients with a high suspicion of radiculopathy due to lumbar disc herniation, yet the MRI is equivocal or negative, nerve conduction studies are indicated. T2-weighted images allow for clear visualization of protruded disc material in the spinal canal. Myelography. An X-ray of the spinal canal following injection of a contrast material into the surrounding cerebrospinal fluid spaces will reveal displacement of the contrast material. It can show the presence of structures that can cause pressure on the spinal cord or nerves, such as herniated discs, tumors, or bone spurs. Because myelography involves the injection of foreign substances, MRI scans are now preferred for most patients. Myelograms still provide excellent outlines of space-occupying lesions, especially when combined with CT scanning (CT myelography). CT myelography is the imaging modality of choice to visualize herniated discs in patients with contraindications for an MRI. However, due to its invasiveness, the assistance of a trained radiologist is required. Myelography is associated with risks like post-spinal headache, meningeal infection, and radiation exposure. Recent advances with a multidetector CT scan have made the diagnostic level of it nearly equal to the MRI. The presence and severity of myelopathy can be evaluated by means of transcranial magnetic stimulation (TMS), a neurophysiological method that measures the time required for a neural impulse to cross the pyramidal tracts, starting from the cerebral cortex and ending at the anterior horn cells of the cervical, thoracic, or lumbar spinal cord. This measurement is called the central conduction time (CCT). TMS can aid physicians to: determine if myelopathy exists identify the level of the spinal cord where myelopathy is located. This is especially useful in cases where more than two lesions may be responsible for the clinical symptoms and signs, such as in patients with two or more cervical disc hernias assess the progression of myelopathy with time, for example before and after cervical spine surgery TMS can also help in the differential diagnosis of different causes of pyramidal tract damage. Electromyography and nerve conduction studies (EMG/NCS) measure the electrical impulses along nerve roots, peripheral nerves, and muscle tissue. Tests can indicate if there is ongoing nerve damage, if the nerves are in a state of healing from a past injury, or if there is another site of nerve compression. EMG/NCS studies are typically used to pinpoint the sources of nerve dysfunction distal to the spine. Differential diagnosis Tests may be required to distinguish spinal disc herniations from other conditions with similar symptoms. Discogenic pain Mechanical pain Myofascial pain Abscess Aortic dissection Discitis or osteomyelitis Hematoma Mass lesion or malignancy Benign tumor like neurinoma or meningeoma Myocardial infarction Sacroiliac joint dysfunction Spinal stenosis Spondylosis or spondylolisthesis Treatment In the majority of cases spinal disc herniation can be treated successfully conservatively, without surgical removal of the herniated material. Sciatica is a set of symptoms associated with lumbar disc herniation. 
A study on sciatica showed that about one-third of patients with sciatica recover within two weeks after presentation using conservative measures alone, and about three-quarters of patients recovered after three months of conservative treatment. However, sciatica may also less commonly be caused by nerve compression by muscles, tumors and other masses, and the study did not indicate the number of individuals with sciatica that had disc herniations. Initial treatment usually consists of nonsteroidal anti-inflammatory drugs (NSAIDs), but long-term use of NSAIDs for people with persistent back pain is complicated by their possible cardiovascular and gastrointestinal toxicity. Epidural corticosteroid injections provide a slight and questionable short-term improvement for those with sciatica, but are of no long-term benefit. Complications occur in up to 17% of cases when injections are performed on the neck, though most are minor. In 2014, the US Food and Drug Administration (FDA) suggested that the "injection of corticosteroids into the epidural space of the spine may result in rare but serious adverse events, including loss of vision, stroke, paralysis, and death", and that "the effectiveness and safety of epidural administration of corticosteroids have not been established, and FDA has not approved corticosteroids for this use". Lumbar disc herniation (LDH) Non-surgical methods of treatment are usually attempted first. Pain medications may be prescribed to alleviate acute pain and allow the patient to begin exercising and stretching. There are a number of non-surgical methods used in attempts to relieve the condition. They are considered indicated, contraindicated, relatively contraindicated, or inconclusive, depending on the safety profile of their risk–benefit ratio and on whether they may or may not help: Indicated Education on proper body mechanics Physical therapy to address mechanical factors, and may include modalities to temporarily relieve pain (i.e. traction, electrical stimulation, massage) Nonsteroidal anti-inflammatory drugs (NSAIDs) Weight control Spinal manipulation. Moderate quality evidence suggests that spinal manipulation is more effective than placebo for the treatment of acute (less than 3 months duration) lumbar disc herniation and acute sciatica. The same study also found "low to very low" evidence for its usefulness in treating chronic lumbar symptoms (more than 3 months) and "the quality of evidence for ... cervical spine-related extremity symptoms of any duration is low or very low". A 2006 review of published research states that spinal manipulation "is likely to be safe when used by appropriately-trained practitioners", and research currently suggests that spinal manipulation is safe for the treatment of disc-related pain. Contraindicated Spinal manipulation is contraindicated when the etiology of the herniation is the result of a Motor Vehicle Collision (MVC) Spinal manipulation is contraindicated for disc herniations when there are progressive neurological deficits such as with cauda equina syndrome. A review of non-surgical spinal decompression found shortcomings in most published studies and concluded that there was only "very limited evidence in the scientific literature to support the effectiveness of non-surgical spinal decompression therapy". Its use and marketing have been very controversial. 
Surgery Surgery may be useful when a herniated disc is causing significant pain radiating into the leg, significant leg weakness, bladder problems, or loss of bowel control. Discectomy (the partial removal of a disc that is causing leg pain) can provide pain relief sooner than non-surgical treatments. Small endoscopic discectomy (called nano-endoscopic discectomy) is non-invasive and does not cause failed back syndrome. Invasive microdiscectomy with a one-inch skin opening has not been shown to result in a significantly different outcome from larger-opening discectomy with respect to pain. It might however have less risk of infection. Failed back syndrome is a significant, potentially disabling, result that can arise following invasive spine surgery to treat disc herniation. Smaller spine procedures such as endoscopic transforaminal lumbar discectomy cannot cause failed back syndrome, because no bone is removed. The presence of cauda equina syndrome (in which there is incontinence, weakness, and genital numbness) is considered a medical emergency requiring immediate attention and possibly surgical decompression. When different forms of surgical treatments including (discetomy, microdiscectomy, and chemonucleolysis) were compared evidence was suggestive rather than conclusive. A Cochrane review from 2007 reported: "surgical discectomy for carefully selected patients with sciatica due to a prolapsed lumbar disc appears to provide faster relief from the acute attack than non‐surgical management. However, any positive or negative effects on the lifetime natural history of the underlying disc disease are unclear. Microdiscectomy gives broadly comparable results to standard discectomy. There is insufficient evidence on other surgical techniques to draw firm conclusions." Regarding the role of surgery for failed medical therapy in people without a significant neurological deficit, a Cochrane review concluded that "limited evidence is now available to support some aspects of surgical practice". Following surgery, rehabilitation programs are often implemented. There is wide variation in what these programs entail. A Cochrane review found low- to very low-quality evidence that patients who participated in high-intensity exercise programs had slightly less short term pain and disability compared to low-intensity exercise programs. There was no difference between supervised and home exercise programs. Epidemiology Disc herniation can occur in any disc in the spine, but the two most common forms are lumbar disc herniation and cervical disc herniation. The former is the most common, causing low back pain (lumbago) and often leg pain as well, in which case it is commonly referred to as sciatica. Lumbar disc herniation occurs 15 times more often than cervical (neck) disc herniation, and it is one of the most common causes of low back pain. The cervical discs are affected 8% of the time and the upper-to-mid-back (thoracic) discs only 1–2% of the time. The following locations have no discs and are therefore exempt from the risk of disc herniation: the upper two cervical intervertebral spaces, the sacrum, and the coccyx. Most disc herniations occur when a person is in their thirties or forties when the nucleus pulposus is still a gelatin-like substance. With age the nucleus pulposus changes ("dries out") and the risk of herniation is greatly reduced. After age 50 or 60, osteoarthritic degeneration (spondylosis) or spinal stenosis are more likely causes of low back pain or leg pain. 
4.8% of males and 2.5% of females older than 35 experience sciatica during their lifetime. Of all individuals, 60% to 80% experience back pain during their lifetime. In 14%, pain lasts more than two weeks. Generally, males have a slightly higher incidence than females. Prevention Because there are various causes of back injuries, prevention must be comprehensive. Back injuries are predominant in manual labor, so the majority of low back pain prevention methods have been applied primarily toward biomechanics. Prevention must come from multiple sources such as education, proper body mechanics, and physical fitness. Education Education should emphasize not lifting beyond one's capabilities and giving the body a rest after strenuous effort. Over time, poor posture can cause the intervertebral disc to tear or become damaged. Striving to maintain proper posture and body alignment will aid in preventing disc degradation. Exercise Exercises that enhance back strength may also be used to prevent back injuries. Back exercises include the prone push-ups/press-ups, upper back extension, transverse abdominis bracing, and floor bridges. If pain is present in the back, it can mean that the stabilization muscles of the back are weak and a person needs to train the trunk musculature. Other preventative measures are to lose weight and not to work oneself past fatigue. Signs of fatigue include shaking, poor coordination, muscle burning, and loss of the transverse abdominal brace. Heavy lifting should be done with the legs performing the work, and not the back. Swimming is a common tool used in strength training. The usage of lumbar-sacral support belts may restrict movement at the spine and support the back during lifting. Research Future treatments may include stem cell therapy.
Biology and health sciences
Types
Health
2817032
https://en.wikipedia.org/wiki/Disposable%20product
Disposable product
A disposable (also called disposable product) is a product designed for a single use after which it is recycled or is disposed as solid waste. The term is also sometimes used for products that may last several months (e.g. disposable air filters) to distinguish from similar products that last indefinitely (e.g. washable air filters). The word "disposables" is not to be confused with the word "consumables", which is widely used in the mechanical world. For example, welders consider welding rods, tips, nozzles, gas, etc. to be "consumables", as they last only a certain amount of time before needing to be replaced. Consumables are needed for a process to take place, such as inks for printing and welding rods for welding, while disposable products are items that can be discarded after they become damaged or are no longer useful. Terminology "Disposable" is an adjective that describes something as non-reusable but is disposed of after use. Many people now use the term as a noun or substantive, i.e. "a disposable" but in reality this is still an adjective as the noun (product, nappy, etc.) is implied. The UK government included an enquiry about how best to define "single-use plastics" in its 2018 consultation document on "tackling the plastic problem". Materials Disposable products are most often made from paper, plastic, cotton, or polystyrene foam. Products made from composite materials such as laminations are difficult to recycle and are more likely to be disposed of at the end of their use. They are typically disposed of using landfills because it is a cheap option. However, in 2004, the European Union passed a law which stopped allowing disposals in landfills. Single-use plastics Many governments are scaling up their efforts to phase out single-use plastic products and packaging and to manage plastic packaging waste in an environmentally sound manner. In 2015 the European Union (EU) adopted a directive requiring a reduction in the consumption of single use plastic bags per person to 90 by 2019 and to 40 by 2025. In April 2019, the EU adopted a further directive banning almost all types of single use plastic, except bottles, from the beginning of the year 2021. In the UK, a 2018 HM Treasury consultation on single-use plastic waste taxation noted that the production process for single-use plastic originates in the conversion of naturally occurring substances into polymers, which vary in their capacity for being re-processed on one or more occasions, meaning that some polymers can be reprocessed and reused only once, and others cannot at present be reprocessed in an economic manner and are therefore destined to have only a single use. The sale of single-use plastic cutlery, balloon sticks and polystyrene cups and food containers was banned in England from 1 October 2023, following an announcement on "some of the most polluting single-use plastic items" published in January 2023. At the same time, restrictions have been introduced concerning the supply of single-use plastic plates, trays and bowls. The EU's Single-Use Plastic Directive (SUPD, Directive EU 2019/904) went into effect in EU member states on 3 July 2021. Also in 2021, Australia's Minderoo Foundation produced a report called the "Plastic Waste Makers Index", which concluded that half of the world's single-use plastic waste is produced by just 20 companies. China is the biggest consumer of single-use plastics. Examples of disposables Kitchen and dining products Aluminum foil and aluminum pans Disposable dishware / drinkware (e.g. 
plates, bowls, cups) Plastic cutlery (e.g. spoons, knives, forks, sporks) Disposable table cloths Cupcake wrappers, coffee filters are compostable Drinking straws Wet wipe Packaging Packages are usually intended for a single use. The waste hierarchy calls for minimization of materials. Many packages and materials are suited to recycling, although the actual recycling percentages are relatively low in many regions. For example, in Chile, only 1% of plastic is recycled. Reuse and repurposing of packaging is increasing, but eventually containers will be recycled, composted, incinerated, or landfilled. There are many container forms such as boxes, bottles, jars, bags, etc. Materials used include paper, plastics, metals, fabrics, composites, etc. A number of countries have adopted legislation to ensure that plastic packaging waste collected from households is sorted, reprocessed, compounded, and reused or recycled. There are also bans on single-use plastic food packaging in many countries. Food service industry disposables In 2002, Taiwan began taking action to reduce the use of disposable tableware at institutions and businesses, and to reduce the use of plastic bags. Yearly, the nation of 17.7 million people was producing 59,000 tons of disposable tableware waste and 105,000 tons of waste plastic bags, and increasing measures have been taken in the years since then to reduce the amount of waste. In 2013 Taiwan's Environmental Protection Administration (EPA) banned outright the use of disposable tableware in the nation's 968 schools, government agencies and hospitals. The ban is expected to eliminate 2,600 metric tons of waste yearly. In Germany, Austria, and Switzerland, laws banning the use of disposable food and drink containers at large-scale events have been enacted. Such a ban has been in place in Munich, Germany, since 1991, applying to all city facilities and events. This includes events of all sizes, including very large ones (Christmas market, Auer-Dult Faire, Oktoberfest and Munich Marathon). For small events of a few hundred people, the city has arranged for a corporation to offer rental of crockery and dishwashing equipment. In part through this regulation, Munich reduced the waste generated by Oktoberfest, which attracts tens of thousands of people, from 11,000 metric tons in 1990 to 550 tons in 1999. China produces about 57 billion pairs of single-use chopsticks yearly, of which half are exported. About 45 percent are made from trees – about 3.8 million of them – mainly cottonwood, birch, and spruce, the remainder being made from bamboo. Japan uses about 24 billion pairs of these disposables per year, and globally about 80 billion pairs are thrown away annually. Reusable chopsticks in restaurants have a lifespan of 130 meals. In Japan, with disposable pairs costing about 2 cents and reusable ones typically costing $1.17, reusables come out ahead of the roughly $2.60 that 130 disposable pairs would cost over that lifespan. Campaigns in several countries to reduce this waste are beginning to have some effect. Israel is considered the world's largest user of disposable food containers and dinnerware. Each month, 250 million plastic cups and more than 12 million paper cups are manufactured, used, and disposed of. In Israel there are no laws governing the manufacture or import of disposable food containers. A kulhar is a traditional handle-less clay cup from South Asia that is typically unpainted and unglazed, and meant to be disposable.
Since kulhars are made by firing in a kiln and are almost never reused, they are inherently sterile and hygienic. Bazaars and food stalls in the Indian subcontinent traditionally served hot beverages, such as tea, in kuhlars, which suffused the beverage with an "earthy aroma" that was often considered appealing. Yoghurt, hot milk with sugar as well as some regional desserts, such as kulfi (traditional ice-cream), are also served in kulhars. Kulhars have gradually given way to polystyrene and coated paper cups, because the latter are lighter to carry in bulk and cheaper.⁠⁠ Medical and hygiene products Medical and surgical device manufacturers worldwide produce a multitude of items that are intended for one use only. The primary reason is infection control; when an item is used only once it cannot transmit infectious agents to subsequent patients. Manufacturers of any type of medical device are obliged to abide by numerous standards and regulations. ISO 15223: Medical Devices and EN 980 cite that single use instruments or devices be labelled as such on their packaging with a universally recognized symbol to denote "do not re-use", "single use", or "use only once". This symbol is the numeral 2, within a circle with a 45° line through it. Examples of single use medical and hygiene items include: Hypodermic needles Toilet paper Disposable towels, paper towels Condoms and other contraception products Disposable enemas and similar products Cotton swabs and pads Medical and cleaning gloves Medical dust respirators (dust masks) Baby and adult diapers, and training pants Shaving razors, safety razors, waxing kits, combs, and other hair control products Toothbrushes, dental floss, and other oral care products Hospital aprons Disposable panties in postpartum Contact lenses, although reusable contact lenses are also available. Electronics Non-rechargeable batteries are considered hazardous waste and should only be disposed of as such. Disposable ink cartridges Disposable cameras Disposable electronic cigarette devices, coils, cartridges, tanks/pods Defense and law enforcement PlastiCuffs Ammunition Barricade tape Other consumer products Garbage bags Vacuum cleaner bags, water, air, coolant, and other filters Ballpoint pens, erasers, and other writing implements Movie sets and theater sets Gift wrapping paper Labels, stickers, and the associated release liners are single use and usually disposed after use. Cigarettes and cigars, plus cigarette packets, filters and rolling paper. Gasoline Natural gas Paper products; toilet paper, paper napkin, Newspapers. etc
Technology
Industry: General
null
2817855
https://en.wikipedia.org/wiki/Larmor%20formula
Larmor formula
In electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light. When any charged particle (such as an electron, a proton, or an ion) accelerates, energy is radiated in the form of electromagnetic waves. For a particle whose velocity is small relative to the speed of light (i.e., nonrelativistic), the total power that the particle radiates (when considered as a point charge) can be calculated by the Larmor formula:

$$P = \frac{2}{3}\,\frac{q^2 a^2}{4\pi\varepsilon_0 c^3} = \frac{q^2 a^2}{6\pi\varepsilon_0 c^3}\ \text{(SI units)}, \qquad P = \frac{2}{3}\,\frac{q^2 a^2}{c^3}\ \text{(Gaussian units)},$$

where $a$ or $\dot v$ is the proper acceleration, $q$ is the charge, and $c$ is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials. In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius $r_e$ and electron mass $m_e$ as

$$P = \frac{2}{3}\,\frac{m_e r_e a^2}{c}.$$

One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced. Derivation To calculate the power radiated by a point charge at a position $\mathbf r_0(t)$, moving with velocity $\mathbf v(t)$, we integrate the Poynting vector over the surface of a sphere of radius $R$, to get

$$P = \oint \frac{c}{4\pi}\left(\mathbf E\times\mathbf B\right)\cdot\hat{\mathbf n}\,R^2\,d\Omega.$$

The electric and magnetic fields are given by the Liénard–Wiechert field equations,

$$\mathbf E = \frac{q\left(\hat{\mathbf n}-\boldsymbol\beta\right)}{\gamma^{2}R^{2}\left(1-\boldsymbol\beta\cdot\hat{\mathbf n}\right)^{3}} + \frac{q}{c}\,\frac{\hat{\mathbf n}\times\left[\left(\hat{\mathbf n}-\boldsymbol\beta\right)\times\dot{\boldsymbol\beta}\right]}{R\left(1-\boldsymbol\beta\cdot\hat{\mathbf n}\right)^{3}}, \qquad \mathbf B = \hat{\mathbf n}\times\mathbf E.$$

The radius vector, $R\hat{\mathbf n}$, is the distance from the charged particle's position at the retarded time to the point of observation of the electromagnetic fields at the present time, $\boldsymbol\beta$ is the charge's velocity divided by $c$, $\dot{\boldsymbol\beta}$ is the charge's acceleration divided by $c$, and $\gamma = 1/\sqrt{1-\beta^2}$. The variables $\boldsymbol\beta$, $\dot{\boldsymbol\beta}$, $\gamma$, $\hat{\mathbf n}$, and $R$ are all evaluated at the retarded time $t_r = t - R/c$. We make a Lorentz transformation to the rest frame of the point charge, where $\boldsymbol\beta' = 0$, and

$$a'_\parallel = \gamma^{3} a_\parallel, \qquad a'_\perp = \gamma^{2} a_\perp.$$

Here, $a'_\parallel$ is the rest frame acceleration parallel to $\boldsymbol\beta$, and $a'_\perp$ is the rest frame acceleration perpendicular to $\boldsymbol\beta$. We integrate the rest frame Poynting vector over the surface of a sphere of radius $R'$, to get

$$P' = \oint \frac{c}{4\pi}\left(\mathbf E'\times\mathbf B'\right)\cdot\hat{\mathbf n}'\,R'^2\,d\Omega'.$$

We take the radiation-zone limit of large $R'$. In this limit, the velocity field, which falls off as $1/R'^2$, can be neglected, and so the electric field is given by

$$\mathbf E' = \frac{q}{c}\,\frac{\hat{\mathbf n}'\times\left(\hat{\mathbf n}'\times\dot{\boldsymbol\beta}'\right)}{R'},$$

with all variables evaluated at the present time. Then, the surface integral for the radiated power reduces to

$$P' = \frac{2}{3}\,\frac{q^2 a'^2}{c^3}.$$

The radiated power can be put back in terms of the original acceleration in the moving frame, to give

$$P' = \frac{2}{3}\,\frac{q^2\gamma^{4}}{c^3}\left(\gamma^{2} a_\parallel^{2} + a_\perp^{2}\right).$$

The variables in this equation are in the original moving frame, but the rate of energy emission on the left-hand side of the equation is still given in terms of the rest frame variables. However, the right-hand side will be shown below to be a Lorentz invariant, so the radiated power can be Lorentz transformed to the moving frame, finally giving

$$P = \frac{2}{3}\,\frac{q^2\gamma^{4}}{c^3}\left(\gamma^{2} a_\parallel^{2} + a_\perp^{2}\right) = \frac{2}{3}\,\frac{q^2\gamma^{6}}{c}\left[\dot{\boldsymbol\beta}^{2} - \left(\boldsymbol\beta\times\dot{\boldsymbol\beta}\right)^{2}\right].$$

This result (in two forms) is the same as Liénard's relativistic extension of Larmor's formula, and is given here with all variables at the present time. Its nonrelativistic limit reduces to Larmor's original formula. At high energies, it appears that the power radiated for acceleration parallel to the velocity is a factor $\gamma^2$ larger than that for perpendicular acceleration. However, writing the Liénard formula in terms of the velocity gives a misleading implication. In terms of momentum instead of velocity, the Liénard formula becomes

$$P = \frac{2}{3}\,\frac{q^2}{m^2 c^3}\left[\left(\frac{d\mathbf p}{d\tau}\right)^{2} - \frac{1}{c^2}\left(\frac{dE}{d\tau}\right)^{2}\right].$$

This shows that the power emitted for $\dot{\mathbf p}$ perpendicular to the velocity is larger by a factor of $\gamma^2$ than the power for $\dot{\mathbf p}$ parallel to the velocity. This results in radiation damping being negligible for linear accelerators, but a limiting factor for circular accelerators.
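As a rough numerical illustration of the formulas above, the following Python sketch evaluates the nonrelativistic Larmor power and the Liénard generalization for an electron in SI units. The test acceleration and Lorentz factor are arbitrary assumptions chosen only to make the $\gamma^2$ gap between parallel and perpendicular acceleration visible; they are not values taken from the article.

```python
import math

# SI constants (CODATA values, rounded).
q = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

def larmor_power(a):
    """Nonrelativistic Larmor power P = q^2 a^2 / (6 pi eps0 c^3), in watts."""
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

def lienard_power(a_par, a_perp, gamma):
    """Lienard power P = q^2 gamma^4 (gamma^2 a_par^2 + a_perp^2) / (6 pi eps0 c^3)."""
    return q**2 * gamma**4 * (gamma**2 * a_par**2 + a_perp**2) / (6 * math.pi * eps0 * c**3)

a = 1.0e20      # test acceleration, m/s^2 (illustrative assumption)
gamma = 10.0    # test Lorentz factor (illustrative assumption)

print(f"Larmor (nonrelativistic):      {larmor_power(a):.3e} W")
print(f"Lienard, a parallel to v:      {lienard_power(a, 0.0, gamma):.3e} W")
print(f"Lienard, a perpendicular to v: {lienard_power(0.0, a, gamma):.3e} W")
# For the same magnitude of acceleration, the parallel case exceeds the
# perpendicular case by a factor gamma^2, as stated in the text above.
```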
Covariant form The radiated power is actually a Lorentz scalar, given in covariant form as

$$P = -\frac{2}{3}\,\frac{q^2}{m^2 c^3}\,\frac{dp_\mu}{d\tau}\frac{dp^\mu}{d\tau}.$$

To show this, we reduce the four-vector scalar product to vector notation. We start with

$$-\frac{dp_\mu}{d\tau}\frac{dp^\mu}{d\tau} = \left(\frac{d\mathbf p}{d\tau}\right)^{2} - \frac{1}{c^2}\left(\frac{dE}{d\tau}\right)^{2}.$$

The time derivatives are

$$\frac{d\mathbf p}{d\tau} = m\gamma^{2}\dot{\mathbf v} + m\gamma^{4}\,\frac{\mathbf v\left(\mathbf v\cdot\dot{\mathbf v}\right)}{c^2}, \qquad \frac{dE}{d\tau} = m\gamma^{4}\left(\mathbf v\cdot\dot{\mathbf v}\right).$$

When these derivatives are used, we get

$$-\frac{dp_\mu}{d\tau}\frac{dp^\mu}{d\tau} = m^{2}\gamma^{6}\left[\dot{\mathbf v}^{2} - \frac{\left(\mathbf v\times\dot{\mathbf v}\right)^{2}}{c^2}\right].$$

With this expression for the scalar product, the manifestly invariant form for the power agrees with the vector form above, demonstrating that the radiated power is a Lorentz scalar. Angular distribution The angular distribution of radiated power is given by a general formula, applicable whether or not the particle is relativistic. In CGS units, this formula is

$$\frac{dP}{d\Omega} = \frac{q^2}{4\pi c^3}\,\frac{\left|\hat{\mathbf n}\times\left[\left(\hat{\mathbf n}-\boldsymbol\beta\right)\times\dot{\mathbf v}\right]\right|^{2}}{\left(1-\hat{\mathbf n}\cdot\boldsymbol\beta\right)^{5}},$$

where $\hat{\mathbf n}$ is a unit vector pointing from the particle towards the observer. In the case of linear motion (velocity parallel to acceleration), this simplifies to

$$\frac{dP}{d\Omega} = \frac{q^2 a^2}{4\pi c^3}\,\frac{\sin^{2}\theta}{\left(1-\beta\cos\theta\right)^{5}},$$

where $\theta$ is the angle between the observer and the particle's motion. Radiation reaction The radiation from a charged particle carries energy and momentum. In order to satisfy energy and momentum conservation, the charged particle must experience a recoil at the time of emission. The radiation must exert an additional force on the charged particle. This force is known as the Abraham–Lorentz force; its non-relativistic limit is known as the Lorentz self-force, while relativistic forms are known as the Lorentz–Dirac force or Abraham–Lorentz–Dirac force. The radiation reaction phenomenon is one of the key problems and consequences of the Larmor formula. According to classical electrodynamics, a charged particle produces electromagnetic radiation as it accelerates. The particle loses momentum and energy to this radiation, which carries them away. The radiation reaction force, in turn, also acts on the charged particle as a result of the radiation. The dynamics of charged particles are significantly affected by the existence of this force. In particular, it causes a change in their motion that may be accounted for by the Larmor formula, a factor in the Lorentz–Dirac equation. According to the Lorentz–Dirac equation, a charged particle's velocity will be influenced by a "self-force" resulting from its own radiation. This self-force can give rise to non-physical behavior such as runaway solutions, in which the particle's velocity or energy becomes infinite in a finite amount of time. A resolution to the paradoxes resulting from the introduction of a self-force due to the emission of electromagnetic radiation is that there is no self-force produced. The acceleration of a charged particle produces electromagnetic radiation, whose outgoing energy reduces the energy of the charged particle. This results in 'radiation reaction' that decreases the acceleration of the charged particle, not as a self-force, but simply as less acceleration of the particle. Atomic physics The development of quantum physics, notably the Bohr model of the atom, was able to explain this gap between the classical prediction and the observed stability of atoms. The Bohr model proposed that transitions between distinct energy levels, which electrons could only inhabit, might account for the observed spectral lines of atoms. The wave-like properties of electrons and the idea of energy quantization were used to explain the stability of these electron orbits. The Larmor formula can only be used for non-relativistic particles, which limits its usefulness. The Liénard–Wiechert potential is a more comprehensive formula that must be employed for particles travelling at relativistic speeds.
In certain situations, more intricate calculations including numerical techniques or perturbation theory could be necessary to precisely compute the radiation the charged particle emits.
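The angular distribution quoted above for linear motion can be explored numerically to show the forward beaming of the emission as the speed approaches $c$. The short Python sketch below scans a grid of angles and compares the peak of the emission lobe with the ultrarelativistic estimate $\theta_{\max}\approx 1/(2\gamma)$; the chosen speeds and grid resolution are arbitrary illustrative assumptions.

```python
import math

def angular_shape(theta, beta):
    """Angular shape of dP/dOmega for acceleration parallel to velocity:
    sin^2(theta) / (1 - beta*cos(theta))^5. The constant prefactor
    q^2 a^2 / (4 pi c^3) is omitted since only the shape matters here."""
    return math.sin(theta) ** 2 / (1 - beta * math.cos(theta)) ** 5

# Report where the emission peaks for a few speeds.
for beta in (0.01, 0.5, 0.9, 0.99):
    thetas = [i * math.pi / 20000 for i in range(1, 20000)]
    peak = max(thetas, key=lambda t: angular_shape(t, beta))
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    print(f"beta = {beta}: peak emission at {math.degrees(peak):.2f} deg, "
          f"1/(2*gamma) = {math.degrees(1.0 / (2.0 * gamma)):.2f} deg")
```

For small speeds the lobe peaks near 90 degrees to the motion, as in the nonrelativistic dipole pattern, while for speeds close to $c$ the peak tips forward toward $1/(2\gamma)$.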
Physical sciences
Electromagnetic radiation
Physics
2819553
https://en.wikipedia.org/wiki/Hyaenodon
Hyaenodon
Hyaenodon ("hyena-tooth") is an extinct genus of carnivorous placental mammals from extinct tribe Hyaenodontini within extinct subfamily Hyaenodontinae (in extinct family Hyaenodontidae), that lived in Eurasia and North America from the middle Eocene, throughout the Oligocene, to the early Miocene. Description Typical of early carnivorous mammals, individuals of Hyaenodon had a very massive skull, but a small brain. The skull is long with a narrow snout—much larger in relation to the length of the skull than in canine carnivores, for instance. The neck was shorter than the skull, while the body was long and robust and terminated in a long tail. Compared to the larger (but not closely related) Hyainailouros, the dentition of Hyaenodon was geared more towards shearing meat and less towards bone crushing. Some species of this genus were among the largest terrestrial carnivorous mammals of their time; others were only of the size of a marten. Remains of many species are known from North America, Europe, and Asia. The average weight of adult or subadult H. horridus, the largest North American species, is estimated to about and may not have exceeded . H. gigas, the largest Hyaenodon species, was much larger, being and around . H. crucians from the early Oligocene of North America is estimated to only . H. microdon and H. mustelinus from the late Eocene of North America were even smaller and weighed probably about . Tooth eruption Studies on juvenile Hyaenodon specimens show that the animal had a very unusual system of tooth replacement. Juveniles took about 3–4 years to complete the final stage of eruption, implying a long adolescent phase. In North American forms, the first upper premolar erupts before the first upper molar, while European forms show an earlier eruption of the first upper molar. Paleoecology The various species of Hyaenodon competed with each other and with other hyaenodont genera (including Sinopa, Dissopsalis and Hyainailurus), and played important roles as predators in ecological communities as late as the Miocene in Asia and preyed on a variety of prey species such as primitive horses like Mesohippus, Brontotheres, early camels, oreodonts and even primitive rhinos. Species of Hyaenodon have been shown to have successfully preyed on other large carnivores of their time, including a nimravid ("false sabertooth cat"), according to analysis of tooth puncture marks on a fossil Dinictis skull found in North Dakota. Zigzag Hunter–Schreger bands in the enamel indicate that bone was a significant component of the diet of Hyaenodon. In North America the last Hyaenodon, in the form of H. brevirostrus, disappeared in the late Oligocene. In Europe, they had already vanished earlier in the Oligocene. 
Classification and phylogeny Taxonomy Tribe: †Hyaenodontini Genus: †Hyaenodon †Hyaenodon brachyrhynchus †Hyaenodon chunkhtensis †Hyaenodon dubius †Hyaenodon eminus †Hyaenodon exiguus †Hyaenodon filholi †Hyaenodon gervaisi †Hyaenodon heberti (Europe, 41–33.9 mya) †Hyaenodon leptorhynchus †Hyaenodon minor †Hyaenodon neimongoliensis †Hyaenodon pervagus †Hyaenodon pumilus †Hyaenodon requieni †Hyaenodon rossignoli †Hyaenodon weilini (China, 23–16.9 mya) †Hyaenodon yuanchuensis Subgenus: †Neohyaenodon (paraphyletic subgenus) †Hyaenodon gigas †Hyaenodon horridus †Hyaenodon incertus †Hyaenodon macrocephalus †Hyaenodon megaloides †Hyaenodon milvinus †Hyaenodon mongoliensis †Hyaenodon montanus †Hyaenodon vetus Subgenus: †Protohyaenodon (paraphyletic subgenus) †Hyaenodon brevirostrus †Hyaenodon crucians †Hyaenodon microdon †Hyaenodon mustelinus (North America, 38–30 mya) †Hyaenodon raineyi †Hyaenodon venturae
Biology and health sciences
Mammals: General
Animals
2820700
https://en.wikipedia.org/wiki/Dysnomia%20%28moon%29
Dysnomia (moon)
Dysnomia (formally (136199) Eris I Dysnomia) is the only known moon of the dwarf planet Eris and is the second-largest known moon of a dwarf planet, after Pluto I Charon. It was discovered in September 2005 by Mike Brown and the Laser Guide Star Adaptive Optics (LGSAO) team at the W. M. Keck Observatory. It carried the provisional designation of until it was officially named Dysnomia (from the Ancient Greek word meaning anarchy/lawlessness) in September 2006, after the daughter of the Greek goddess Eris. With an estimated diameter of , Dysnomia spans 24% to 29% of Eris's diameter. It is significantly less massive than Eris, with a density consistent with it being mainly composed of ice. In stark contrast to Eris's highly-reflective icy surface, Dysnomia has a very dark surface that reflects 5% of incoming visible light, resembling typical trans-Neptunian objects around Dysnomia's size. These physical properties indicate Dysnomia likely formed from a large impact on Eris, in a similar manner to other binary dwarf planet systems like Pluto and , and the Earth–Moon system. Discovery In 2005, the adaptive optics team at the Keck telescopes in Hawaii carried out observations of the four brightest Kuiper belt objects (Pluto, , , and ), using the newly commissioned laser guide star adaptive optics system. Observations taken on 10 September 2005, revealed a moon in orbit around Eris, provisionally designated . In keeping with the Xena nickname that was already in use for Eris, the moon was nicknamed "Gabrielle" by its discoverers, after Xena's sidekick. Physical characteristics Submillimeter-wavelength observations of the Eris–Dysnomia system's thermal emissions by the Atacama Large Millimeter Array (ALMA) in 2015 first showed that Dysnomia had a large diameter and a very low albedo, with the initial estimate being . Further observations by ALMA in 2018 refined Dysnomia's diameter to (24% to 29% of Eris's diameter) and an albedo of . Of the known moons of dwarf planets, only Charon is larger, making Dysnomia the second-largest moon of a dwarf planet. Dysnomia's low albedo significantly contrasts with Eris's extremely high albedo of 0.96; its surface has been described to be darker than coal, which is a typical characteristic seen in trans-Neptunian objects around Dysnomia's size. Eris and Dysnomia are mutually tidally locked, like Pluto and Charon. Astrometric observations of the Eris–Dysnomia system by ALMA show that Dysnomia does not induce detectable barycentric wobbling in Eris's position, implying its mass must be less than (mass ratio ). This is below the estimated mass range of (mass ratio 0.01–0.03) that would normally allow Eris to be tidally locked within the range of the Solar System, suggesting that Eris must therefore be unusually dissipative. ALMA's upper-limit mass estimate for Dysnomia corresponds to an upper-limit density of , implying a mostly icy composition. The shape of Dysnomia is not known, but its low density suggests that it should not be in hydrostatic equilibrium. The brightness difference between Dysnomia and Eris decreases with longer and redder wavelengths; Hubble Space Telescope observations show that Dysnomia is 500 times fainter than Eris (6.70-magnitude difference) in visible light, whereas near-infrared Keck telescope observations show that Dysnomia is ~60 times fainter (4.43-magnitude difference) than Eris. 
This indicates Dysnomia has a very different spectrum and redder color than Eris, indicating a significantly darker surface, something that has been proven by submillimeter observations. Orbit Combining Keck and Hubble observations, the orbit of Dysnomia was used to determine the mass of Eris through Kepler's third law of planetary motion. Dysnomia's average orbital distance from Eris is approximately , with a calculated orbital period of 15.786 days, or approximately half a month. This shows that the mass of Eris is 1.27 times that of Pluto. Extensive observations by Hubble indicate that Dysnomia has a nearly circular orbit around Eris, with a low orbital eccentricity of . Over the course of Dysnomia's orbit, its distance from Eris varies by due to its slightly eccentric orbit. Dynamical simulations of Dysnomia suggest that its orbit should have completely circularized through mutual tidal interactions with Eris within timescales of 5–17 million years, regardless of the moon's density. A non-zero eccentricity would thus mean that Dysnomia's orbit is being perturbed, possibly due to the presence of an additional inner satellite of Eris. However, it is possible that the measured eccentricity is not real, but due to interference of the measurements by albedo features, or systematic errors. From Hubble observations from 2005 to 2018, the inclination of Dysnomia's orbit with respect to Eris's heliocentric orbit is calculated to be approximately 78°. Since the inclination is less than 90°, Dysnomia's orbit is therefore prograde relative to Eris's orbit. In 2239, Eris and Dysnomia will enter a period of mutual events in which Dysnomia's orbital plane is aligned edge-on to the Sun, allowing for Eris and Dysnomia to take turns eclipsing each other. Formation Eight of the ten largest trans-Neptunian objects are known to have at least one satellite. Among the fainter members of the trans-Neptunian population, only about 10% are known to have satellites. This is thought to imply that collisions between large KBOs have been frequent in the past. Impacts between bodies of the order of across would throw off large amounts of material that would coalesce into a moon. A similar mechanism is thought to have led to the formation of the Moon when Earth was struck by a giant impactor (see Giant impact hypothesis) early in the history of the Solar System. Name Mike Brown, the moon's discoverer, chose the name Dysnomia for the moon. As the daughter of Eris, the mythological Dysnomia fit the established pattern of naming moons after gods associated with the primary body (hence, Jupiter's largest moons are named after lovers of Jupiter, while Saturn's are named after his fellow Titans). The English translation of Dysnomia, "lawlessness", also echoes Lucy Lawless, the actress who played Xena in Xena: Warrior Princess on television. Before receiving their official names, Eris and Dysnomia had been nicknamed "Xena" and "Gabrielle", though Brown states that the connection was accidental. A primary reason for the name was its similarity to the name of Brown's wife, Diane, following a pattern established with Pluto. Pluto owes its name in part to its first two letters, which form the initials of Percival Lowell, the founder of the observatory where its discoverer, Clyde Tombaugh, was working, and the person who inspired the search for "Planet X". James Christy, who discovered Charon, did something similar by adding the Greek ending -on to Char, the nickname of his wife Charlene. 
(Christy wasn't aware that the resulting 'Charon' was a figure in Greek mythology.) "Dysnomia", similarly, has the same first letter as Brown's wife, Diane.
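The mass determination described in the Orbit section follows directly from Kepler's third law, and can be sketched numerically. In the Python snippet below, the orbital period is the value quoted above, but the semi-major axis of roughly 37,300 km is an assumed outside estimate (no figure is given in the text), so the result should be read as illustrative only.

```python
import math

G = 6.67430e-11              # gravitational constant, m^3 kg^-1 s^-2
a = 3.73e7                   # assumed semi-major axis of Dysnomia's orbit, m (~37,300 km)
T = 15.786 * 86400.0         # orbital period from the text, s

# Kepler's third law for the total system mass; since Dysnomia is far less
# massive than Eris, this is essentially the mass of Eris.
M = 4.0 * math.pi ** 2 * a ** 3 / (G * T ** 2)
print(f"Eris system mass ~ {M:.2e} kg")          # roughly 1.6e22 kg
print(f"In Pluto masses:  ~ {M / 1.303e22:.2f}") # close to the 1.27 quoted above
```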
Physical sciences
Solar System
Astronomy
3783315
https://en.wikipedia.org/wiki/Virus%20latency
Virus latency
Virus latency (or viral latency) is the ability of a pathogenic virus to lie dormant (latent) within a cell, denoted as the lysogenic part of the viral life cycle. A latent viral infection is a type of persistent viral infection which is distinguished from a chronic viral infection. Latency is the phase in certain viruses' life cycles in which, after initial infection, proliferation of virus particles ceases. However, the viral genome is not eradicated. The virus can reactivate and begin producing large amounts of viral progeny (the lytic part of the viral life cycle) without the host becoming reinfected by new outside virus, and stays within the host indefinitely. Virus latency is not to be confused with clinical latency during the incubation period when a virus is not dormant. Mechanisms Episomal latency Episomal latency refers to the use of genetic episomes during latency. In this latency type, viral genes are stabilized, floating in the cytoplasm or nucleus as distinct objects, either as linear or structures. Episomal latency is more vulnerable to ribozymes or host foreign gene degradation than proviral latency (see below). Herpesviridae One example is the herpes virus family, Herpesviridae, all of which establish latent infection. Herpes virus include chicken-pox virus and herpes simplex viruses (HSV-1, HSV-2), all of which establish episomal latency in neurons and leave linear genetic material floating in the cytoplasm. Epstein-Barr virus The Gammaherpesvirinae subfamily is associated with episomal latency established in cells of the immune system, such as B-cells in the case of Epstein–Barr virus. Epstein–Barr virus lytic reactivation (which can be due to chemotherapy or radiation) can result in genome instability and cancer. Herpes simplex virus In the case of herpes simplex (HSV), the virus has been shown to fuse with DNA in neurons, such as nerve ganglia or neurons, and HSV reactivates upon even minor chromatin loosening with stress, although the chromatin compacts (becomes latent) upon oxygen and nutrient deprivation. Cytomegalovirus Cytomegalovirus (CMV) establishes latency in myeloid progenitor cells, and is reactivated by inflammation. Immunosuppression and critical illness (sepsis in particular) often results in CMV reactivation. CMV reactivation is commonly seen in patients with severe colitis. Advantages and disadvantages Advantages of episomal latency include the fact that the virus may not need to enter the cell nucleus, and hence may avoid nuclear domain 10 (ND10) from activating interferon via that pathway. Disadvantages include more exposure to cellular defenses, leading to possible degradation of viral gene via cellular enzymes. Reactivation Reactivation may be due to stress, UV light, etc. Proviral latency A provirus is a virus genome that is integrated into the DNA of a host cell. Advantages and disadvantages Advantages include automatic host cell division results in replication of the virus's genes, and the fact that it is nearly impossible to remove an integrated provirus from an infected cell without killing the cell. A disadvantage of this method is the need to enter the nucleus (and the need for packaging proteins that will allow for that). However, viruses that integrate into the host cell's genome can stay there as long as the cell lives. HIV One of the best-studied viruses that exhibits viral latency is HIV. HIV uses reverse transcriptase to create a DNA copy of its RNA genome. HIV latency allows the virus to largely avoid the immune system. 
Like other viruses that go latent, it does not typically cause symptoms while latent. HIV in proviral latency is nearly impossible to target with antiretroviral drugs. Several classes of latency reversing agents (LRAs) are under development for possible use in shock-and-kill strategies in which the latently infected cellular reservoirs would be reactivated (the shock) so that anti-viral treatment could take effect (the kill). Maintaining latency Both proviral and episomal latency may require maintenance for continued infection and fidelity of viral genes. Latency is generally maintained by viral genes expressed primarily during latency. Expression of these latency-associated genes may function to keep the viral genome from being digested by cellular ribozymes or being found out by the immune system. Certain viral gene products (RNA transcripts such as non-coding RNAs and proteins) may also inhibit apoptosis or induce cell growth and division to allow more copies of the infected cell to be produced. An example of such a gene product is the latency associated transcripts (LAT) in herpes simplex virus, which interfere with apoptosis by downregulating a number of host factors, including major histocompatibility complex (MHC) and inhibiting the apoptotic pathway. A certain type of latency could be ascribed to the endogenous retroviruses. These viruses have incorporated into the human genome in the distant past, and are now transmitted through reproduction. Generally these types of viruses have become highly evolved, and have lost the expression of many gene products. Some of the proteins expressed by these viruses have co-evolved with host cells to play important roles in normal processes. Ramifications While viral latency exhibits no active viral shedding nor causes any pathologies or symptoms, the virus is still able to reactivate via external activators (sunlight, stress, etc.) to cause an acute infection. In the case of herpes simplex virus, which generally infects an individual for life, a serotype of the virus reactivates occasionally to cause cold sores. Although the sores are quickly resolved by the immune system, they may be a minor annoyance from time to time. In the case of varicella zoster virus, after an initial acute infection (chickenpox) the virus lies dormant until reactivated as herpes zoster. More serious ramifications of a latent infection could be the possibility of transforming the cell, and forcing the cell into uncontrolled cell division. This is a result of the random insertion of the viral genome into the host's own gene and expression of host cellular growth factors for the benefit of the virus. In a notable event, this actually happened during gene therapy through the use of retroviral vectors at the Necker Hospital in Paris, where twenty young boys received treatment for a genetic disorder, after which five developed leukemia-like syndromes. Human papilloma virus This is also seen with infections of the human papilloma virus in which persistent infection may lead to cervical cancer as a result of cellular transformation. HIV In the field of HIV research, proviral latency in specific long-lived cell types is the basis for the concept of one or more viral reservoirs, referring to locations (cell types or tissues) characterized by persistence of latent virus. Specifically, the presence of replication-competent HIV in resting CD4-positive T cells allows this virus to persist for years without evolving despite prolonged exposure to antiretroviral drugs. 
This latent reservoir of HIV may explain the inability of antiretroviral treatment to cure HIV infection.
Biology and health sciences
Concepts
Health
1387527
https://en.wikipedia.org/wiki/Shake%20%28unit%29
Shake (unit)
A shake is an informal metric unit of time equal to 10 nanoseconds, or 10⁻⁸ seconds. It was originally coined for use in nuclear physics, helping to conveniently express the timing of various events in a nuclear reaction. Etymology Like many informal units having to do with nuclear physics, it arose from top secret operations of the Manhattan Project during World War II. The word "shake" was taken from the idiomatic expression "in two shakes of a lamb's tail", which indicates a very short time interval. The phrase "a couple of shakes," in reference to the measurement of time, may have been popularized by Richard Barham's Ingoldsby Legends (1840); however, the phrase was already part of vernacular language long before that. Nuclear physics For nuclear-bomb designers, the term was a convenient name for the short interval, rounded to 10 nanoseconds, which was frequently seen in their measurements and calculations: The typical time required for one step in a chain reaction (i.e. the typical time for each neutron to cause a fission event, which releases more neutrons) is of the order of 1 shake, and a chain reaction is typically complete by 50 to 100 shakes.
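The arithmetic involved is simple; as a minimal illustration (not part of the original article, and with names chosen only for this sketch), the figures above can be expressed in seconds:

```python
# Illustrative only: a shake is defined as 10 nanoseconds (1e-8 s).
SHAKE_IN_SECONDS = 10e-9

def shakes_to_seconds(shakes: float) -> float:
    """Convert a duration expressed in shakes to seconds."""
    return shakes * SHAKE_IN_SECONDS

# One chain-reaction step (~1 shake) and a complete chain reaction (50-100 shakes):
for n in (1, 50, 100):
    print(f"{n:>3} shakes = {shakes_to_seconds(n):.1e} s")
# Prints 1.0e-08 s, 5.0e-07 s and 1.0e-06 s, i.e. a complete chain reaction
# lasts roughly half a microsecond to one microsecond.
```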
Physical sciences
Time
Basics and measurement
1387753
https://en.wikipedia.org/wiki/Microsporidia
Microsporidia
Microsporidia are a group of spore-forming unicellular parasites. These spores contain an extrusion apparatus that has a coiled polar tube ending in an anchoring disc at the apical part of the spore. They were once considered protozoans or protists, but are now known to be fungi, or a sister group to true fungi. These fungal microbes are obligate eukaryotic parasites that use a unique mechanism to infect host cells. They have recently been discovered in a 2017 Cornell study to infect Coleoptera on a large scale. So far, about 1500 of the probably more than one million species are named. Microsporidia are restricted to animal hosts, and all major groups of animals host microsporidia. Most infect insects, but they are also responsible for common diseases of crustaceans and fish. The named species of microsporidia usually infect one host species or a group of closely related taxa. Approximately 10 percent of the known species are parasites of vertebrates — several species, most of which are opportunistic, can infect humans, in whom they can cause microsporidiosis. After infection they influence their hosts in various ways and all organs and tissues are invaded, though generally by different species of specialised microsporidia. Some species are lethal, and a few are used in biological control of insect pests. Parasitic castration, gigantism, or change of host sex are all potential effects of microsporidian parasitism (in insects). In the most advanced cases of parasitism the microsporidium rules the host cell completely and controls its metabolism and reproduction, forming a xenoma. Replication takes place within the host's cells, which are infected by means of unicellular spores. These vary from 1–40 μm, making them some of the smallest eukaryotes. Microsporidia that infect mammals are 1.0–4.0 μm. They also have the smallest eukaryotic genomes. The terms "microsporidium" (pl. "microsporidia") and "microsporidian" are used as vernacular names for members of the group. The name Microsporidium Balbiani, 1884 is also used as a catchall genus for incertae sedis members. Morphology Microsporidia lack mitochondria, instead possessing mitosomes. They also lack motile structures, such as flagella. Microsporidia produce highly resistant spores, capable of surviving outside their host for up to several years. Spore morphology is useful in distinguishing between different species. Spores of most species are oval or pyriform, but rod-shaped or spherical spores are not unusual. A few genera produce spores of unique shape for the genus. The spore is protected by a wall consisting of three layers: an outer, electron-dense exospore; a median, wide and seemingly structureless endospore containing chitin; and a thin internal plasma membrane. In most cases there are two closely associated nuclei, forming a diplokaryon, but sometimes there is only one. The anterior half of the spore contains a harpoon-like apparatus with a long, thread-like polar filament, which is coiled up in the posterior half of the spore. The anterior part of the polar filament is surrounded by a polaroplast, a lamella of membranes. Behind the polar filament, there is a posterior vacuole. Infection In the gut of the host the spore germinates; it builds up osmotic pressure until its rigid wall ruptures at its thinnest point at the apex. The posterior vacuole swells, forcing the polar filament to rapidly eject the infectious content into the cytoplasm of the potential host.
Simultaneously the material of the filament is rearranged to form a tube which functions as a hypodermic needle and penetrates the gut epithelium. Once inside the host cell, a sporoplasm grows, dividing or forming a multinucleate plasmodium, before producing new spores. The life cycle varies considerably. Some have a simple asexual life cycle, while others have a complex life cycle involving multiple hosts and both asexual and sexual reproduction. Different types of spores may be produced at different stages, probably with different functions including autoinfection (transmission within a single host). Medical implications In animals and humans, microsporidia often cause chronic, debilitating diseases rather than lethal infections. Effects on the host include reduced longevity, fertility, weight, and general vigor. Vertical transmission of microsporidia is frequently reported. In the case of insect hosts, vertical transmission often occurs as transovarial transmission, where the microsporidian parasites pass from the ovaries of the female host into eggs and eventually multiply in the infected larvae. Amblyospora salinaria n. sp. which infects the mosquito Culex salinarius Coquillett, and Amblyospora californica which infects the mosquito Culex tarsalis Coquillett, provide typical examples of transovarial transmission of microsporidia. Microsporidia, specifically the mosquito-infecting Vavraia culicis, are being explored as a possible 'evolution-proof' malaria-control method. Microsporidian infection of Anopheles gambiae (the principal vector of Plasmodium falciparum malaria) reduces malarial infection within the mosquito, and shortens the mosquito lifespan. As the majority of malaria-infected mosquitoes naturally die before the malaria parasite is mature enough to transmit, any increase in mosquito mortality through microsporidian-infection may reduce malaria transmission to humans. In May 2020, researchers reported that Microsporidia MB, a symbiont in the midgut and ovaries of An. arabiensis, significantly impaired transmission of P. falciparum, had "no overt effect" on the fitness of host mosquitoes, and was transmitted vertically (through inheritance). Clinical Microsporidian infections of humans sometimes cause a disease called microsporidiosis. At least 14 microsporidian species, spread across eight genera, have been recognized as human pathogens. These include Trachipleistophora hominis. As hyperparasites Microsporidia can infect a variety of hosts, including hosts which are themselves parasites. In that case, the microsporidian species is a hyperparasite, i.e. a parasite of a parasite. As an example, more than eighteen species are known which parasitize digeneans (parasitic flatworms). These digeneans are themselves parasites in various vertebrates and molluscs. Eight of these species belong to the genus Nosema. Similarly, the microsporidian species Toguebayea baccigeri is a parasite of a digenean, the faustulid Bacciger israelensis, itself an intestinal parasite of a marine fish, the bogue Boops boops (Teleostei, Sparidae). Genomes Microsporidia have the smallest known (nuclear) eukaryotic genomes. The parasitic lifestyle of microsporidia has led to a loss of many mitochondrial and Golgi genes, and even their ribosomal RNAs are reduced in size compared with those of most eukaryotes. As a consequence, the genomes of microsporidia are much smaller than those of other eukaryotes. 
Currently known microsporidial genomes are 2.5 to 11.6 Mb in size, encoding from 1,848 to 3,266 proteins, which is in the same range as many bacteria. Horizontal gene transfer (HGT) seems to have occurred many times in microsporidia. For instance, the genomes of Encephalitozoon romaleae and Trachipleistophora hominis contain genes that derive from animals and bacteria, and some even from fungi. DNA repair The Rad9-Rad1-Hus1 protein complex (also known as the 9-1-1 complex) in eukaryotes is recruited to sites of DNA damage where it is considered to help trigger the checkpoint-signaling cascade. Genes that code for heterotrimeric 9-1-1 are present in microsporidia. In addition to the 9-1-1 complex, other components of the DNA repair machinery are also present, indicating that repair of DNA damage likely occurs in microsporidia. Phylogeny Phylogeny of Rozellomycota Classification The first described microsporidian genus, Nosema, was initially placed by Nägeli in the fungal group Schizomycetes together with some bacteria and yeasts. For some time microsporidia were considered very primitive eukaryotes, placed in the protozoan group Cnidospora. Later, especially because of the lack of mitochondria, they were placed along with the other Protozoa such as diplomonads, parabasalids and archamoebae in the protozoan-group Archezoa. More recent research has falsified this theory of early origin (for all of these). Instead, microsporidia are proposed to be highly developed and specialized organisms, which have simply dispensed with functions that are no longer needed because they are supplied by the host. Furthermore, spore-forming organisms in general do have a complex system of reproduction, both sexual and asexual, which looks far from primitive. Since the mid-2000s microsporidia are placed within the Fungi or as a sister-group of the Fungi with a common ancestor. Work to identify clades is largely based on habitat and host. Three classes of Microsporidia are proposed by Vossbrinck and Debrunner-Vossbrinck, based on the habitat: Aquasporidia, Marinosporidia and Terresporidia. A second classification by Cavalier-Smith 1993: Subphyla Rudimicrospora Cavalier-Smith 1993 Class Minisporea Cavalier-Smith 1993 Order Minisporida Sprague, 1972 Class Metchnikovellea Weiser, 1977 Order Metchnikovellida Vivier, 1975 Subphyla Polaroplasta Cavalier-Smith 1993 Class Pleistophoridea Cavalier-Smith 1993 Order Pleistophorida Stempell 1906 Class Disporea Cavalier-Smith 1993 Subclass Unikaryotia Cavalier-Smith 1993 Subclass Diplokaryotia Cavalier-Smith 1993
Biology and health sciences
Basics
Plants
1388407
https://en.wikipedia.org/wiki/Brassica%20oleracea
Brassica oleracea
Brassica oleracea is a plant of the family Brassicaceae, also known as wild cabbage in its uncultivated form. The species evidently originated from feral populations of related plants in the Eastern Mediterranean, where it was most likely first cultivated. It has many common cultivars used as vegetables, including cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, Savoy cabbage, kohlrabi, and gai lan. Description Wild B. oleracea is a tall biennial or perennial plant that forms a stout rosette of large leaves in the first year. The grayish-green leaves are fleshy and thick, helping the plant store water and nutrients in difficult environments. In its second year, a woody spike grows up to tall, from which branch off stems with long clusters of yellow four-petaled flowers. Taxonomy Origins According to the Triangle of U theory, B. oleracea is very closely related to five other species of the genus Brassica. A 2021 study suggests that Brassica cretica, native to the Eastern Mediterranean, particularly Greece and the Aegean Islands, was the closest living relative of cultivated B. oleracea, thus supporting the view that its cultivation originated in the Eastern Mediterranean region, with later admixture from other Brassica species. Genetic analysis of nine wild populations on the French Atlantic coast indicated their common feral origin, deriving from domesticated plants escaped from fields and gardens. The cultivars of B. oleracea are grouped by developmental form into several major cultivar groups, of which the Acephala ("non-heading") group remains most like the natural wild cabbage in appearance. Etymology 'Brassica' was Pliny the Elder's name for several cabbage-like plants. Its specific epithet oleracea means "vegetable/herbal" in Latin and is a form of (). Distribution and habitat Although rarely abundant, wild cabbage is found on the coasts of Britain, France, Spain, and Italy. Wild cabbage is a hardy plant with high tolerance for salt and lime. Its intolerance of competition from other plants typically restricts its natural occurrence to limestone sea cliffs, like the chalk cliffs on both sides of the English Channel. Cultivation B. oleracea has become established as an important human food crop plant, used because of its large food reserves, which are stored over the winter in its leaves. It has been bred into a wide range of cultivars, including cabbage, broccoli, cauliflower, brussels sprouts, collards, and kale, some of which are hardly recognizable as being members of the same genus, let alone species. The historical genus of Crucifera, meaning "cross-bearing" in reference to the four-petaled flowers, may be the only unifying feature beyond taste. Researchers believe it has been cultivated for several thousand years, but its history as a domesticated plant is not clear before Greek and Roman times, when it was a well-established garden vegetable. Theophrastus mentions three kinds of (ῤάφανος): a curly-leaved, a smooth-leaved, and a wild-type. He reports the antipathy of the cabbage and the grape vine, for the ancients believed cabbages grown near grapes would impart their flavour to the wine. History Through artificial selection for various phenotype traits, the emergence of variations of the plant with drastic differences in appearance occurred over centuries. Preference for leaves, terminal buds, lateral buds, stems, and inflorescences resulted in selection of varieties of wild cabbage into the many forms known today. 
The wild plant (and its ancestors) originated in the eastern Mediterranean region of Europe. Sanskrit writings from about 4,000 years ago, as well as Greek writings from the sixth century BC, suggest that cultivation of the plant may already have occurred by those times. Impact of preference The preference for eating the leaves led to the selection of plants with larger leaves, which were harvested and their seeds planted for the next crop. By around the fifth century BC, the form now known as kale had developed. Preference led to further artificial selection of kale plants with more tightly bunched leaves or terminal buds. Around the first century AD, the phenotype variation of B. oleracea known as cabbage emerged. Phenotype selection preferences in Germany resulted in a new variation from the kale cultivar. By selecting for wider stems, the variant plant known as kohlrabi emerged around the first century AD. A European preference for eating immature buds emerged, selecting for the inflorescence. Early records from the 15th century AD indicate that early cauliflower and broccoli heading types were found throughout southern Italy and Sicily, although these types may not have been resolved into distinct cultivars until about 100 years later. Further selection in Belgium for lateral buds led to Brussels sprouts in the 18th century. Cultivar groups According to the Royal Botanic Gardens (Kew Species Profiles) the species has eight cultivar groups. Each cultivar group has many cultivars, like 'Lacinato' kale or 'Belstar' broccoli. Acephala: non-heading cultivars (kale, collards, ornamental cabbage, ornamental kale, flowering kale, tree cabbage). Alboglabra: Asian cuisine cultivars (Chinese kale, Chinese broccoli, gai lan, kai lan). Botrytis: cultivars that form compact inflorescences (broccoli, cauliflower, broccoflower, calabrese broccoli, romanesco broccoli). Capitata: cabbage and cabbage-like cultivars (cabbage, savoy cabbage, red cabbage). Gemmifera: bud-producing cultivars (sprouts, Brussels sprouts) Gongylodes: turnip-like cultivars (kohlrabi, knol-kohl) Italica: sprouts (purple sprouting broccoli, sprouting broccoli). Edible inflorescences not compacted into a single head. Tronchuda: low-growing annuals with spreading leaves (Portuguese cabbage, seakale cabbage). A 2024 study compares 704 B. oleracea sequences and establishes a phylogenetic tree of cultivars. The authors find large-scale changes in gene expression and gene presence. Some genes are putatively linked to certain traits such as arrested inflorescence (typical of cauliflower and broccoli). Uses Human genetics in relation to taste The TAS2R38 gene encodes a G protein-coupled receptor that functions as a taste receptor, mediated by ligands such as propylthiouracil (PTU) and phenylthiocarbamide that bind to the receptor and initiate signaling that confers various degrees of taste perception. Bitter taste receptors in the TAS2R family are also found in gut mucosal and pancreatic cells in humans and rodents. These receptors influence release of hormones involved in appetite regulation, such as peptide YY and glucagon-like peptide-1, and therefore may influence caloric intake and the development of obesity. Thus, bitter taste perception may affect dietary behaviors by influencing both taste preferences and metabolic hormonal regulation. Three variants in the TAS2R38 gene – rs713598, rs1726866, and rs10246939 – are in high linkage disequilibrium in most populations and result in amino acid coding changes that lead to a range of bitter taste perception phenotypes.
The PAV haplotype is dominant; therefore, individuals with at least one copy of the PAV allele perceive molecules in vegetables that resemble PTU as tasting bitter, and consequently may develop an aversion to bitter vegetables. In contrast, individuals with two AVI haplotypes are bitter non-tasters. PAV and AVI haplotypes are the most common, though other haplotypes exist that confer intermediate bitter taste sensitivity (AAI, AAV, AVV, and PVI). This taste aversion may apply to vegetables in general.
Biology and health sciences
Brassicales
Plants
1389714
https://en.wikipedia.org/wiki/Man-portable%20air-defense%20system
Man-portable air-defense system
Man-portable air-defense systems (MANPADS or MPADS) are portable shoulder-launched surface-to-air missiles. They are guided weapons and are a threat to low-flying aircraft, especially helicopters, and have also been used against low-flying cruise missiles. These short-range missiles can also be fired from vehicles, tripods, weapon platforms, and warships. Overview MANPADS were developed in the 1950s to provide military ground forces with protection from jet aircraft. They have received a great deal of attention, partly because armed terrorist groups have used them against commercial airliners. These missiles, affordable and widely available through a variety of sources, have been used successfully over the past three decades, in military conflicts as well as by militant groups and terrorist organizations. Twenty-five countries, including China, Iran, Poland, Russia, Sweden, the United Kingdom and the United States, produce man-portable air defense systems. Possession, export, and trafficking of such weapons are tightly controlled, due to the threat they pose to civil aviation, although such efforts have not always been successful. The missiles are about in length and weigh about , depending on the model. MANPADS generally have a target detection range of about and an engagement range of about , so aircraft flying at or higher are relatively safe. Missile types Infrared Infrared homing missiles are designed to home in on a heat source on an aircraft, typically the engine exhaust plume, and detonate a warhead in or near the heat source to disable the aircraft or to simply burst it into flames. These missiles use passive guidance, meaning that they emit no signals of their own to track the target, which makes them difficult to detect by aircraft employing countermeasure systems. First generation The first missiles deployed in the 1960s were infrared missiles. First generation MANPADS, such as the US Redeye, early versions of the Soviet 9K32 Strela-2, and the Chinese HN-5 (a copy of the Soviet Strela-2), are considered "tail-chase weapons" as their uncooled spin-scan seekers can only discern the superheated interior of the target's jet engine from background noise. This means they are only capable of accurately tracking the aircraft from the rear when the engines are fully exposed to the missile's seeker and provide a sufficient thermal signature for engagement. First generation IR missiles are also highly susceptible to interfering thermal signatures from background sources, including the sun, which many experts feel makes them somewhat unreliable, and they are prone to erratic behaviour in the terminal phase of engagement. While less effective than more modern weapons, they remain common in irregular forces as they are not limited by the short shelf-life of gas coolant cartridges used by later systems. Second generation Second generation infrared missiles, such as early versions of the U.S. Stinger, the Soviet Strela-3, and the Chinese FN-6, use gas-cooled seeker heads and a conical scanning technique, which enables the seeker to filter out most interfering background IR sources as well as permitting head-on and side engagement profiles. Later versions of the FIM-43 Redeye are regarded as straddling the first and second generations as they are gas-cooled but still use a spin-scan seeker. Third generation Third generation infrared MANPADS, such as the French Mistral, the Soviet 9K38 Igla, and the US Stinger B, use rosette scanning detectors to produce a quasi-image of the target.
Their seeker compares input from multiple detection bands, either two widely separated IR bands or IR and UV, giving them much greater ability to discern and reject countermeasures deployed by the target aircraft. Fourth generation Fourth generation missiles, such as the canceled American FIM-92 Stinger Block 2, Russian Verba, Chinese QW-4, Indian VSHORAD and Japanese Type 91 surface-to-air missile, use imaging infrared focal plane array guidance systems and other advanced sensor systems, which permit engagement at greater ranges. Command line-of-sight Command guidance (CLOS) missiles do not home in on a particular aspect (heat source or radio or radar transmissions) of the targeted aircraft. Instead, the missile operator or gunner visually acquires the target using a magnified optical sight and then uses radio controls to "fly" the missile into the aircraft. One of the benefits of such a missile is that it is virtually immune to flares and other basic countermeasure systems that are designed primarily to defeat IR missiles. The major drawback of CLOS missiles is that they require highly trained and skilled operators. Numerous reports from the Soviet–Afghan War in the 1980s cite Afghan mujahedin as being disappointed with the British-supplied Blowpipe CLOS missile because it was too difficult to learn to use and highly inaccurate, particularly when employed against fast-moving jet aircraft. Given these considerations, many experts believe that CLOS missiles are not as ideally suited for untrained personnel use as IR missiles, which sometimes are referred to as "fire and forget" missiles. Later versions of CLOS missiles, such as the British Javelin, use a solid-state television camera in lieu of the optical tracker to make the gunner's task easier. The Javelin's manufacturer, Thales Air Defence, claims that their missile is virtually impervious to countermeasures. Laser guided Laser guided MANPADS use beam-riding guidance where a sensor in the missile's tail detects the emissions from a laser on the launcher and attempts to steer the missile to fly at the exact middle of the beam, or between two beams. Missiles such as Sweden's RBS-70 and Britain's Starstreak can engage aircraft from all angles and only require the operator to continuously track the target using a joystick to keep the laser aim point on the target: the latest version of RBS 70 features a tracking engagement mode where fine aim adjustments of the laser emitter are handled by the launcher itself, with the user only having to make coarse aim corrections. Because there are no radio data links from the ground to the missile, the missile cannot be effectively jammed after it is launched. Even though beam-riding missiles require relatively extensive training and skill to operate, many experts consider these missiles particularly menacing due to the missiles' resistance to most conventional countermeasures in use today. Notable uses Against military aircraft List of Soviet aircraft losses in Afghanistan Argentine air forces in the Falklands War British air forces in the Falklands War. On 27 February 1991, during Operation Desert Storm, a USAF F-16 was shot down by an Igla-1. On 16 April 1994, during Operation Deny Flight, a Sea Harrier of the 801 Naval Air Squadron of the Royal Navy, operating from the aircraft carrier HMS Ark Royal, was brought down by an Igla-1.
On 30 August 1995, during Operation Deliberate Force, a French Air Force Mirage 2000D was shot down over Bosnia by a heat-seeking 9K38 Igla missile fired by air defense units of Army of Republika Srpska, prompting efforts to obtain improved defensive systems. On 27 May 1999, the Anza Mk-II was used to attack Indian aircraft during the Kargil conflict with India. A MiG-27 of the Indian Air Force was shot down by Pakistan Army Air Defence forces. List of Russian aircraft losses in the Second Chechen War List of Coalition aircraft crashes in Afghanistan List of aviation shootdowns and accidents during the Iraq War 2002 Khankala Mi-26 crash: On 19 August 2002, a Russian-made Igla shoulder-fired surface-to-air missile hit an overloaded Mil Mi-26 helicopter, causing it to crash into a minefield at the main military base at Khankala near the capital city of Grozny, Chechnya. 127 Russian troops and crew were killed. In the 2008 South Ossetia War, Polish made PZR Grom MANPADS were used by Georgia Syrian Civil War On 3 February 2018, a Russian Sukhoi Su-25 piloted by Major Roman Filipov was shot down by a MANPADS over rebel-held territory while conducting airstrikes over Syria's northwestern city of Saraqib. War in Donbas 2022 Russian invasion of Ukraine Against civilian aircraft The 1978 Air Rhodesia Viscount shootdown is the first example of a civilian airliner shot down by a man-portable surface-to-air missile. The pilot of the aircraft managed to make a controlled crash landing. Air Rhodesia Flight 827 was also shot down in February 1979 by the Zimbabwe People's Revolutionary Army with a Strela 2 missile. All 59 passengers and crew were killed. The 1993 Sukhumi airliner attacks involved 5 civilian aircraft shot down within a total of 4 days in Sukhumi, Abkhazia, Georgia, killing 108 people. On 6 April 1994, a surface-to-air missile struck the wing of a Dassault Falcon 50 as it prepared to land in Kigali, Rwanda. A second missile hit its tail. Rwandan president Juvénal Habyarimana and Burundian president Cyprien Ntaryamira were among the nine passengers on board. The plane erupted into flames in mid-air before crashing into the garden of the presidential palace, exploding on impact. This incident was the ignition spark of the Rwandan genocide. 1998 Lionair Flight LN 602 shootdown: On 7 October 1998, the Tamil Tigers shot down an aircraft off the coast of Sri Lanka. 2002 Mombasa airliner attack: On 28 November 2002, two shoulder-launched Strela 2 (SA-7) surface-to-air missiles were fired at a chartered Boeing 757 airliner as it took off from Moi International Airport. The missiles missed the aircraft which continued safely to Tel Aviv, carrying 271 vacationers from Mombasa back to Israel. In photos, the missile systems were painted in light blue, the color used in the Soviet military for training material (a training SA-7 round would not have the guidance system). 2003 Baghdad DHL attempted shootdown incident: On 22 November 2003, an Airbus A300B4-203F cargo plane, operating on behalf of DHL was hit by an SA-14 missile, which resulted in the loss of its hydraulic systems. The crew later landed the crippled aircraft safely by using only differential engine thrust by adjusting the individual throttle controls of each engine. 2007 Mogadishu TransAVIAexport Airlines Il-76 crash: On 23 March 2007, a TransAVIAexport Airlines Ilyushin Il-76 airplane crashed in the outskirts of Mogadishu, Somalia, during the 2007 Battle of Mogadishu. 
Witnesses claim that a surface-to-air missile was fired immediately prior to the accident. However, Somali officials deny that the aircraft was shot down. Over fifty MANPADS attacks on civilian aircraft are on record to 2007. Thirty-three aircraft were shot down killing over 800 people in the process. Against cruise missiles On 10 October 2022, during the 2022 Russian invasion of Ukraine, Ukrainian forces were recorded allegedly shooting down a Russian cruise missile using MANPADS. Since then, other instances have been videoed and shared on social media platforms. Countermeasures Man-portable air defense systems are a popular black market item for insurgent forces. Their proliferation became the subject of the Wassenaar Arrangement's (WA)22 Elements for Export Controls of MANPADS, the G8 Action Plan of 2 June 2003, the October 2003 Asia-Pacific Economic Cooperation (APEC) Summit, Bangkok Declaration on Partnership for the Future and in July 2003 the Organization for Security and Co-operation in Europe (OSCE), Forum for Security Co-operation, Decision No. 7/03: Man-portable Air Defense Systems. Understanding the problem in 2003, Colin Powell remarked that there was "no threat more serious to aviation" than the missiles, which can be used to shoot down helicopters and commercial airliners, and are sold illegally for as little as a few hundred dollars. The U.S. has led a global effort to dismantle these weapons, with over 30,000 voluntarily destroyed since 2003, but probably thousands are still in the hands of insurgents, especially in Iraq, where they were looted from the military arsenals of the former dictator Saddam Hussein, and in Afghanistan as well. In August 2010, a report by the Federation of American Scientists (FAS) confirmed that "only a handful" of illicit MANPADS were recovered from national resistance caches in Iraq in 2009, according to media reports and interviews with military sources. Military With the growing number of MANPADS attacks on civilian airliners, a number of different countermeasure systems have been developed specifically to protect aircraft against the missiles. AN/ALQ-144, AN/ALQ-147 and AN/ALQ-157 are U.S.-produced systems, developed by Sanders Associates in the 1970s. AN/ALQ-212 ATIRCM, AN/AAQ-24 Nemesis are NATO systems developed by BAE Systems and Northrop Grumman respectively. Civilian Civil Aircraft Missile Protection System (CAMPS)—Developed by Saab Avitronics, Chemring Countermeasures and Naturelink Aviation, using non-pyrotechnic infrared decoy Weapons by manufacturing country China HN-5 HN-6 QW-1 QW-11 QW-11G QW-1A QW-1M QW-2 QW-3 FN-6 QW-1 Vanguard TB-1 France Mistral 1 Mistral 2 Mistral 3 United Kingdom Blowpipe Javelin Starburst Starstreak India MPDMS VSHORAD Iran Misagh-1 Misagh-2 Misagh-3 Qaem Italy New VSHORAD Japan Type 91 (SAM-2, SAM-2B) Pakistan Anza Poland PZR Grom Piorun Romania CA-94: CA-94M Soviet Union/Russian Federation 9K32M 'Strela-2' (SA-7) 9K36 'Strela-3' (SA-14) 9K310 'Igla-M' (SA-16) 9K38 'Igla' (SA-18) 9K338 ' Igla-S' (SA-24) 9K333 'Verba' (SA-25) Sweden RBS 70 RBS 70 NG United States FIM-43 'Redeye' FIM-92 'Stinger' South Korea Chiron North Korea HT-16PGJ Turkey Sungur MANPAD PorSav Black market Although most MANPADS are owned and accounted for by governments, political upheavals and corruption have allowed thousands of them to enter the black market. In the years 1998–2018, at least 72 non-state groups have fielded MANPADS. Civilians in the United States cannot legally own MANPADS.
Technology
Missiles
null
1390921
https://en.wikipedia.org/wiki/Single%20bond
Single bond
In chemistry, a single bond is a chemical bond between two atoms involving two valence electrons. That is, the atoms share one pair of electrons where the bond forms. Therefore, a single bond is a type of covalent bond. When shared, each of the two electrons involved is no longer in the sole possession of the orbital in which it originated. Rather, both of the two electrons spend time in either of the orbitals which overlap in the bonding process. As a Lewis structure, a single bond is denoted as AːA or A-A, for which A represents an element. In the first rendition, each dot represents a shared electron, and in the second rendition, the bar represents both of the electrons shared in the single bond. A covalent bond can also be a double bond or a triple bond. A single bond is weaker than either a double bond or a triple bond. This difference in strength can be explained by examining the component bonds of which each of these types of covalent bonds consists (Moore, Stanitski, and Jurs 393). Usually, a single bond is a sigma bond. An exception is the bond in diboron, which is a pi bond. In contrast, the double bond consists of one sigma bond and one pi bond, and a triple bond consists of one sigma bond and two pi bonds (Moore, Stanitski, and Jurs 396). The number of component bonds is what determines the strength disparity. It stands to reason that the single bond is the weakest of the three because it consists of only a sigma bond, and the double and triple bonds consist not only of this type of component bond but also of at least one additional bond. The single bond has the capacity for rotation, a property not possessed by the double bond or the triple bond. The structure of pi bonds does not allow for rotation (at least not at 298 K), so the double bond and the triple bond, which contain pi bonds, are held rigid by this property. The sigma bond is not so restrictive, and the single bond is able to rotate using the sigma bond as the axis of rotation (Moore, Stanitski, and Jurs 396-397). Another property comparison can be made in bond length. Single bonds are the longest of the three types of covalent bonds as interatomic attraction is greater in the two other types, double and triple. The increase in component bonds is the reason for this attraction increase as more electrons are shared between the bonded atoms (Moore, Stanitski, and Jurs 343). Single bonds are often seen in diatomic molecules. Examples of this use of single bonds include H2, F2, and HCl. Single bonds are also seen in molecules made up of more than two atoms. Examples of this use of single bonds include: Both bonds in H2O All 4 bonds in CH4 Single bonding even appears in molecules as complex as hydrocarbons larger than methane. The type of covalent bonding in hydrocarbons is extremely important in the nomenclature of these molecules. Hydrocarbons containing only single bonds are referred to as alkanes (Moore, Stanitski, and Jurs 334). The names of specific molecules which belong to this group end with the suffix -ane. Examples include ethane, 2-methylbutane, and cyclopentane (Moore, Stanitski, and Jurs 335).
Physical sciences
Bonding
Chemistry
1391822
https://en.wikipedia.org/wiki/Repenomamus
Repenomamus
Repenomamus (Latin: "reptile" (reptilis), "mammal" (mammalis)) is a genus of opossum- to badger-sized gobiconodontid mammals containing two species, Repenomamus robustus and Repenomamus giganticus. Both species are known from fossils found in China that date to the early Cretaceous period, about 125–123.2 million years ago. R. robustus is one of several Mesozoic mammals for which there is good evidence that it fed on vertebrates, including dinosaurs. Though it is not entirely clear whether or not these animals primarily hunted live dinosaurs or scavenged dead ones, evidence for the former is present in fossilized remains showcasing the results of what was most likely a predation attempt by R. robustus directed at a specimen of the dinosaur Psittacosaurus lujiatunensis. R. giganticus is among the largest mammals known from the Mesozoic era, only surpassed by Patagomaia. Classification and discovery The fossils were recovered from the lagerstätte of the Yixian Formation in the Liaoning province of China, which is renowned for its extraordinarily well-preserved fossils of feathered dinosaurs. They have been specifically dated to 125–123.2 million years ago, during the Early Cretaceous period. Repenomamus is a genus of eutriconodonts, a group of early mammals with no modern relatives. R. robustus was described by Li, Wang, Wang and Li in 2001, and R. giganticus was described by Hu, Meng, Wang and Li in 2005. The two known species are the sole members of the family Repenomamidae, which was also described in the same paper in 2001. It is sometimes alternatively listed as a member of the family Gobiconodontidae; although this assignment is controversial, a close relationship to this family is well-founded. Description Individuals of the known species in Repenomamus are some of the largest known Mesozoic mammals represented by reasonably complete fossils (though Kollikodon and Patagomaia may be larger, and Schowalteria, Oxlestes, Khuduklestes and Bubodens reached similar if not larger sizes). Adults of R. robustus were the size of a Virginia opossum. The complete specimen had a body length, excluding the tail, of , with an estimated skull length of , although a more partial specimen with a preserved skull is also known. The estimated mass of R. robustus is . The known adult of R. giganticus was about 50% larger than R. robustus, with a body length of and total length over (skull reaching , trunk of and preserved tail in length) and an estimated mass of . These finds considerably extend the known body size range of Mesozoic mammals. In fact, Repenomamus was larger than several small sympatric dromaeosaurid dinosaurs like Graciliraptor. Features of its shoulder and leg bones indicate a sprawling posture, as in most small to medium-sized living therian mammals, with plantigrade feet. Unlike therian mammals, Repenomamus had a proportionally longer body with shorter limbs. The dental formula was originally interpreted as , though a more recent study indicates instead that it was . Paleobiology Features of the teeth and jaw suggest that Repenomamus were carnivorous, and a specimen of R. robustus discovered with the fragmentary skeleton of a juvenile Psittacosaurus preserved in its stomach represents the second direct evidence that at least some Mesozoic mammals were carnivorous and fed on other vertebrates, including dinosaurs; a recorded attack on an Archaeornithoides by a Deltatheridium predates its description. More evidence suggesting Repenomamus was suited to a predatory lifestyle was later revealed when a specimen of R.
robustus was uncovered alongside an adult Psittacosaurus. The intertwined nature of the fossil, similar to the Fighting Dinosaurs fossil of Mongolia, was likely a byproduct of an altercation between the two animals in which the mammal was most likely the instigator of an ongoing predation attempt. This was posited on the basis that the Repenomamus involved was noted to have been latching on to the Psittacosaurus with its arms and legs while biting the dinosaur. Specializations towards carnivory are known in eutriconodonts as a whole, and similarly large-sized species like Gobiconodon, Jugulator and even Triconodon itself are thought to have tackled proportionally large prey as well; evidence of scavenging is even assigned to the former. Like most other non-placental mammals, Repenomamus had epipubic bones, implying that it gave birth to undeveloped young like modern marsupials, or laid eggs like modern monotremes.
Biology and health sciences
Stem-mammals
Animals
28862381
https://en.wikipedia.org/wiki/Thermal%20fluctuations
Thermal fluctuations
In statistical mechanics, thermal fluctuations are random deviations of an atomic system from its average state that occur in a system at equilibrium. All thermal fluctuations become larger and more frequent as the temperature increases, and likewise they decrease as temperature approaches absolute zero. Thermal fluctuations are a basic manifestation of the temperature of systems: A system at nonzero temperature does not stay in its equilibrium microscopic state, but instead randomly samples all possible states, with probabilities given by the Boltzmann distribution. Thermal fluctuations generally affect all the degrees of freedom of a system: There can be random vibrations (phonons), random rotations (rotons), random electronic excitations, and so forth. Thermodynamic variables, such as pressure, temperature, or entropy, likewise undergo thermal fluctuations. For example, for a system that has an equilibrium pressure, the system pressure fluctuates to some extent about the equilibrium value. Only the 'control variables' of statistical ensembles (such as the number of particles N, the volume V and the internal energy E in the microcanonical ensemble) do not fluctuate. Thermal fluctuations are a source of noise in many systems. The random forces that give rise to thermal fluctuations are a source of both diffusion and dissipation (including damping and viscosity). The competing effects of random drift and resistance to drift are related by the fluctuation-dissipation theorem. Thermal fluctuations play a major role in phase transitions and chemical kinetics. Central limit theorem The volume of phase space , occupied by a system of degrees of freedom is the product of the configuration volume and the momentum space volume. Since the energy is a quadratic form of the momenta for a non-relativistic system, the radius of momentum space will be so that the volume of a hypersphere will vary as giving a phase volume of where is a constant depending upon the specific properties of the system and is the Gamma function. In the case that this hypersphere has a very high dimensionality, , which is the usual case in thermodynamics, essentially all the volume will lie near to the surface where we used the recursion formula . The surface area has its legs in two worlds: (i) the macroscopic one in which it is considered a function of the energy, and the other extensive variables, like the volume, that have been held constant in the differentiation of the phase volume, and (ii) the microscopic world where it represents the number of complexions that is compatible with a given macroscopic state. It is this quantity that Planck referred to as a 'thermodynamic' probability. It differs from a classical probability inasmuch as it cannot be normalized; that is, its integral over all energies diverges—but it diverges as a power of the energy and not faster. Since its integral over all energies is infinite, we might try to consider its Laplace transform which can be given a physical interpretation. The exponential decreasing factor, where is a positive parameter, will overpower the rapidly increasing surface area so that an enormously sharp peak will develop at a certain energy . Most of the contribution to the integral will come from an immediate neighborhood about this value of the energy. This enables the definition of a proper probability density according to whose integral over all energies is unity on the strength of the definition of , which is referred to as the partition function, or generating function.
The latter name is due to the fact that the derivatives of its logarithm generate the central moments, namely, and so on, where the first term is the mean energy and the second one is the dispersion in energy. The fact that increases no faster than a power of the energy ensures that these moments will be finite. Therefore, we can expand the factor about the mean value , which will coincide with for Gaussian fluctuations (i.e. average and most probable values coincide), and retaining the lowest order terms results in This is the Gaussian, or normal, distribution, which is defined by its first two moments. In general, one would need all the moments to specify the probability density, , which is referred to as the canonical, or posterior, density in contrast to the prior density , which is referred to as the 'structure' function. This is the central limit theorem as it applies to thermodynamic systems. If the phase volume increases as , its Laplace transform, the partition function, will vary as . Rearranging the normal distribution so that it becomes an expression for the structure function and evaluating it at gives It follows from the expression of the first moment that , while from the second central moment, . Introducing these two expressions into the expression of the structure function evaluated at the mean value of the energy leads to . The denominator is exactly Stirling's approximation for , and if the structure function retains the same functional dependency for all values of the energy, the canonical probability density, will belong to the family of exponential distributions known as gamma densities. Consequently, the canonical probability density falls under the jurisdiction of the local law of large numbers, which asserts that a sequence of independent and identically distributed random variables tends to the normal law as the sequence increases without limit. Distribution about equilibrium The expressions given below are for systems that are close to equilibrium and have negligible quantum effects. Single variable Suppose is a thermodynamic variable. The probability distribution for is determined by the entropy : If the entropy is Taylor expanded about its maximum (corresponding to the equilibrium state), the lowest order term is a Gaussian distribution: The quantity is the mean square fluctuation. Multiple variables The above expression has a straightforward generalization to the probability distribution : where is the mean value of . Fluctuations of the fundamental thermodynamic quantities In the table below are given the mean square fluctuations of the thermodynamic variables and in any small part of a body. The small part must still be large enough, however, to have negligible quantum effects.
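Since the article's inline formulas were lost in extraction, the sketch below restates in LaTeX, under the assumption of conventional textbook notation (Ω for the structure function, Z for the partition function, β for the Laplace-transform parameter, S for entropy, k_B for Boltzmann's constant), the standard relations the passage describes; these are reconstructions, not the source's own expressions.

```latex
% Partition function as the Laplace transform of the structure function \Omega(E):
\[
  Z(\beta) = \int_0^{\infty} \Omega(E)\, e^{-\beta E}\, \mathrm{d}E .
\]
% Its logarithm generates the central moments of the energy (mean and dispersion):
\[
  \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta},
  \qquad
  \langle (\Delta E)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2} .
\]
% Fluctuation of a single thermodynamic variable x about equilibrium
% (entropy expanded to second order about its maximum):
\[
  w(x) \propto e^{S(x)/k_B}
  \;\approx\;
  \exp\!\left[ -\frac{(x - \langle x \rangle)^2}{2\,\langle (\Delta x)^2 \rangle} \right] .
\]
```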
Physical sciences
Thermodynamics
Physics
5130291
https://en.wikipedia.org/wiki/Emperor%20angelfish
Emperor angelfish
The emperor angelfish (Pomacanthus imperator) is a species of marine angelfish. It is a reef-associated fish, native to the Indian and Pacific Oceans, from the Red Sea to Hawaii and the Austral Islands. This species is generally associated with stable populations and faces no major threats of extinction. It is a favorite of photographers, artists, and aquarists because of its unique, brilliant pattern of coloration. Description The emperor angelfish shows a marked difference between the juveniles and the adults. The juveniles have a dark blue body which is marked with concentric curving lines, alternating between pale blue and white, with the smallest, which are completely enclosed within each other, located posteriorly. These lines become vertical at the anterior end. The dorsal fin has a white margin and the caudal fin is transparent. The adults are marked with blue and yellow horizontal stripes, a light blue face with a dark blue mask over the eyes and a yellow caudal fin. There is a blackish band above the pectoral fins, the top of which is at the level of the upper orbit. The front margin of this band is bright blue and the rear margin is a thin yellow line. The anal fin has a dark blue background with lighter blue horizontal stripes. The dorsal fin has 13–14 spines and 17–21 soft rays while the anal fin has 3 spines and 18–21 soft rays. This species attains a maximum total length of . Distribution The emperor angelfish has a wide Indo-Pacific distribution. It occurs from the Red Sea southwards along the East African coast to Mozambique and Madagascar, eastwards through the Indian and Pacific Oceans as far as the Tuamotu Islands and the Line Islands. It extends north to the Kansai region and to the southern regions of Japan and south to the Great Barrier Reef of Australia, New Caledonia and the Austral Islands in French Polynesia. Vagrants have been recorded from Hawaii. It has been recorded at several sites off the coast of Florida and off Puerto Rico, and since 2009 as a recent introduction to the eastern Mediterranean basin, where it is now found in low numbers at a number of localities. Habitat and biology The emperor angelfish is found at depths between . The adults are found in areas where there is a rich growth of corals on clear lagoon, channel, or seaward reefs. Here they are normally observed underneath ledges and within caves. The subadults are frequently recorded in cavities in reefs and along surge channels on seaward reefs. The juveniles frequently shelter below ledges, in reef cavities and in relatively sheltered areas in channels and over outer reef flats. Its diet comprises sponges and other encrusting organisms, as well as tunicates. They form pairs. The juveniles and adults may act as cleaner fish, cleaning ectoparasites off larger fishes. When frightened, these fish can produce a knocking sound. Systematics The emperor angelfish was first formally described in 1787 as Chaetodon imperator by the German physician and naturalist Marcus Elieser Bloch (1723–1799), with the type locality given as Japan. Some authorities place this species in the subgenus Acanthochaetodon. The specific name imperator means "emperor" and reflects the Dutch name Keyser van Iapan meaning "Emperor of Japan" coined by the publisher Louis Renard (ca. 1678–1746) in 1719, possibly reflecting its majestic appearance. Utilisation The emperor angelfish is common in the aquarium trade.
Biology and health sciences
Acanthomorpha
Animals
20294004
https://en.wikipedia.org/wiki/Psocodea
Psocodea
Psocodea is a taxonomic group of insects comprising the bark lice, book lice and parasitic lice. It was formerly considered a superorder, but is now generally considered by entomologists as an order. Despite the greatly differing appearance of parasitic lice (Phthiraptera), they are believed to have evolved from within the former order Psocoptera, which contained the bark lice and book lice, now found to be paraphyletic. They are often regarded as the most primitive of the hemipteroids. Psocodea contains around 11,000 species, divided among four suborders and more than 70 families. They range in size from 1–10 millimetres (0.04–0.4 in) in length. The species known as booklice received their common name because they are commonly found amongst old books—they feed upon the paste used in binding. The barklice are found on trees, feeding on algae and lichen. Anatomy and biology Psocids are small, scavenging insects with a relatively generalized body plan. They feed primarily on fungi, algae, lichen, and organic detritus in nature but are also known to feed on starch-based household items like grains, wallpaper glue and book bindings. They have chewing mandibles, and the central lobe of the maxilla is modified into a slender rod. This rod is used to brace the insect while it scrapes up detritus with its mandibles. They also have a swollen forehead, large compound eyes, and three ocelli. Their bodies are soft with a segmented abdomen. Some species can spin silk from glands in their mouth. They may festoon large sections of trunk and branches in dense swathes of silk. Some psocids have small ovipositors; the forewings are up to 1.5 times as long as the hindwings, and all four wings have a relatively simple venation pattern, with few cross-veins. The wings, if present, are held tent-like over the body. The legs are slender and adapted for jumping, rather than gripping, as in the true lice. The abdomen has nine segments, and no cerci. There is often considerable variation in the appearance of individuals within the same species. Many have no wings or ovipositors, and may have a different shape to the thorax. Other, more subtle, variations are also known, such as changes to the development of the setae. The significance of such changes is uncertain, but their function appears to be different from similar variations in, for example, aphids. Like aphids, however, many psocids are parthenogenic, and the presence of males may even vary between different races of the same species. Psocids lay their eggs in minute crevices or on foliage, although a few species are known to be viviparous. The young are born as miniature, wingless versions of the adult. These nymphs typically molt six times before reaching full adulthood. The total lifespan of a psocid is rarely more than a few months. Booklice range from approximately 1 mm to 2 mm in length (″ to ″). Some species are wingless and they are easily mistaken for bedbug nymphs and vice versa. Booklouse eggs take two to four weeks to hatch and can reach adulthood approximately two months later. Adult booklice can live for six months. Besides damaging books, they also sometimes infest food storage areas, where they feed on dry, starchy materials. Although some psocids feed on starchy household products, the majority of psocids are woodland insects with little to no contact with humans, and they are therefore of little economic importance. They are scavengers and do not bite humans. Psocids can affect the ecosystems in which they reside.
Many psocids can affect decomposition by feeding on detritus, especially in environments with lower densities of predacious micro arthropods that may eat psocids. The nymph of a psocid species, Psilopsocus mimulus, is the first known wood-boring psocopteran. These nymphs make their own burrows in woody material, rather than inhabiting vacated, existing burrows. This boring activity can create habitats that other organisms may use. Interaction with humans Some species of psocids, such as Liposcelis bostrychophila, are common pests of stored products. Psocids, among other arthropods, have been studied to develop new pest control techniques in food manufacturing. One study found that modified atmospheres during packing (MAP) helped to control the reoccurrence of pests during the manufacturing process and prevented further infestation in the final products that go to consumers. External phylogeny Psocodea has been recovered as a monophyletic group in recent studies. Their next closest relatives are traditionally recognized as the monophyletic grouping Condylognatha that contains Hemiptera (true bugs) and Thysanoptera (thrips), which all combined form the group Paraneoptera. However, this is somewhat unclear, as analysis has shown that Psocodea could instead be the sister taxon to Holometabola, which would render Paraneoptera as paraphyletic. Here is a simple cladogram showing the traditional relationships with a monophyletic Paraneoptera: Here is an alternative cladogram showing Paraneoptera as paraphyletic, with Psocodea as sister taxon to Holometabola: Internal phylogeny Here is a cladogram showing the relationships within Psocodea: Classification The order Psocodea (formerly 'Psocoptera') is divided into three extant suborders. Suborder Trogiomorpha Trogiomorpha have antennae with many segments (22–50 antennomeres) and always three-segmented tarsi. Trogiomorpha is the smallest suborder of the Psocoptera sensu stricto (i.e., excluding Phthiraptera), with about 340 species in 7 families, ranging from the fossil family Archaeatropidae with only a handful of species to the speciose Lepidopsocidae (over 200 species). Trogiomorpha comprises infraorder Atropetae (extant families Lepidopsocidae, Psoquillidae and Trogiidae, and fossil families Archaeatropidae and Empheriidae) and infraorder Psocathropetae (families Psyllipsocidae and Prionoglarididae). Suborder Troctomorpha Troctomorpha have antennae with 15–17 segments and two-segmented tarsi. Troctomorpha comprises the Infraorder Amphientometae (families Amphientomidae, Compsocidae, Electrentomidae, Musapsocidae, Protroctopsocidae and Troctopsocidae) and Infraorder Nanopsocetae (families Liposcelididae, Pachytroctidae and Sphaeropsocidae). Troctomorpha are now known to also contain the order Phthiraptera (lice), and are therefore paraphyletic, as are Psocoptera as a whole. Some Troctomorpha, such as Liposcelis (which are similar to lice in morphology), are often found in birds' nests, and it is possible that a similar behavior in the ancestors of lice is at the origin of the parasitism seen today. Suborder Psocomorpha Psocomorpha are notable for having antennae with 13 segments. They have two- or three-segmented tarsi, this condition being constant (e.g., Psocidae) or variable (e.g., Pseudocaeciliidae) within families. Their wing venation is variable, the most common type being that found in the genus Caecilius (rounded, free areola postica, thickened, free pterostigma, r+s two-branched, m three-branched). 
Additional veins are found in some families and genera (Dicropsocus and Goja in Epipsocidae, many Calopsocidae, etc.) Psocomorpha is the largest suborder of the Psocoptera sensu stricto (i.e., excluding Phthiraptera), with about 3,600 species in 24 families, ranging from the species-poor Bryopsocidae (2 spp.) to the speciose Psocidae (about 900 spp). Psocomorpha comprises Infraorder Epipsocetae (families Cladiopsocidae, Dolabellopsocidae, Epipsocidae, Neurostigmatidae and Ptiloneuridae), Infraorder Caeciliusetae (families Amphipsocidae, Asiopsocidae, Caeciliusidae, Dasydemellidae and Stenopsocidae), Infraorder Homilopsocidea (families Archipsocidae, Bryopsocidae, Calopsocidae, Ectopsocidae, Elipsocidae, Lachesillidae, Mesopsocidae, Peripsocidae, Philotarsidae, Pseudocaeciliidae and Trichopsocidae) and Infraorder Psocetae (families Hemipsocidae, Myopsocidae, Psilopsocidae and Psocidae).
Biology and health sciences
Insects: General
Animals
24318825
https://en.wikipedia.org/wiki/Lamination%20%28geology%29
Lamination (geology)
In geology, lamination is a small-scale sequence of fine layers (laminae; singular: lamina) that occurs in sedimentary rocks. Laminae are normally smaller and less pronounced than bedding. Lamination is often regarded as planar structures one centimetre or less in thickness, whereas bedding layers are greater than one centimetre. However, structures from several millimetres to many centimetres have been described as laminae. A single sedimentary rock can have both laminae and beds. Description Lamination consists of small differences in the type of sediment that occur throughout the rock. These differences are caused by cyclic changes in the supply of sediment. The changes can occur in grain size, clay percentage, microfossil content, organic material content or mineral content and often result in pronounced differences in colour between the laminae. Weathering can make the differences even more clear. Lamination can occur as parallel structures (parallel lamination) or in different sets that make an angle with each other (cross-lamination). It can occur in many different types of sedimentary rock, from coarse sandstone to fine shales, mudstones or evaporites. Because lamination is a small structure, it is easily destroyed by bioturbation (the activity of burrowing organisms) shortly after deposition. Lamination therefore survives better under anoxic circumstances, or when the sedimentation rate was high and the sediment was buried before bioturbation could occur. Origin Lamination develops in fine grained sediment when fine grained particles settle, which can only happen in quiet water. Examples of sedimentary environments are deep marine (at the seafloor) or lacustrine (at the bottom of a lake), or mudflats, where the tide creates cyclic differences in sediment supply. Laminae formed in glaciolacustrine environments (in glacier lakes) are a special case. They are called varves. Quaternary varves are used in stratigraphy and palaeoclimatology to reconstruct climate changes during the last few hundred thousand years. Lamination in sandstone is often formed in a coastal environment, where wave energy causes a separation between grains of different sizes.
Physical sciences
Sedimentology
Earth science
6742209
https://en.wikipedia.org/wiki/Native%20metal
Native metal
A native metal is any metal that is found pure in its metallic form in nature. Metals that can be found as native deposits singly or in alloys include antimony, arsenic, bismuth, cadmium, chromium, cobalt, indium, iron, manganese, molybdenum, nickel, niobium, rhenium, tantalum, tellurium, tin, titanium, tungsten, vanadium, and zinc, as well as the gold group (gold, copper, lead, aluminium, mercury, silver) and the platinum group (platinum, iridium, osmium, palladium, rhodium, ruthenium). Among the alloys found in the native state have been brass, bronze, pewter, German silver, osmiridium, electrum, white gold, silver-mercury amalgam, and gold-mercury amalgam. Only gold, silver, copper and the platinum group occur native in large amounts. Over geological time scales, very few metals can resist natural weathering processes like oxidation, so mainly the less reactive metals such as gold and platinum are found as native metals. The others usually occur as isolated pockets where a natural chemical process reduces a common compound or ore of the metal, leaving the pure metal behind as small flakes or inclusions. Metals are not the only type of chemical element that can occur in the native state. Non-metallic elements occurring in the native state include carbon, sulfur, and selenium. Silicon, a semi-metal, has rarely been found in the native state as small inclusions in gold. Native metals were prehistoric man's only access to metal, since the process of extracting metals from their ores (smelting) is thought to have been discovered around 6500 BC. However, native metals could be found only in impractically small amounts, so while copper and iron were known well before the Copper Age and Iron Age, they did not have a large impact until smelting appeared. Occurrence Gold Most gold is mined as native metal and can be found as nuggets, veins or wires of gold in a rock matrix, or fine grains of gold, mixed in with sediments or bound within rock. The iconic image of gold mining for many is gold panning, which is a method of separating flakes and nuggets of pure gold from river sediments due to their great density. Native gold is the predominant gold mineral on Earth. It is sometimes found alloyed with silver and/or other metals, but true gold compound minerals are uncommon, mainly a handful of selenides and tellurides. Silver Native silver occurs as elongated dendritic coatings or irregular masses. It may also occur as cubic, octahedral, or dodecahedral crystals. It may occur alloyed with gold as electrum. It often occurs with silver sulfide and sulfosalt minerals. Various amalgams of silver and mercury or other metals and mercury do occur rarely as minerals in nature. An example is the mineral eugenite (Ag11Hg2) and related forms. Silver nuggets, wires, and grains are relatively common, but there are also a large number of silver compound minerals owing to silver being more reactive than gold. Platinum group Natural alloys and native forms of the platinum group metals include native osmium, rutheniridosmine, ruthenium, palladium, platinum, and rhodium. In addition, gold, copper, iron, mercury, tin, and lead may occur in alloys of this group. As with gold, salts and other compounds of the platinum group metals are rare; native platinum and related metals and alloys are the predominant minerals bearing these metals. These metals occur associated with ultramafic intrusions, and placer deposits derived from those intrusions. 
Copper Native copper has been historically mined as an early source of the metal. The term Old Copper Complex is used to describe an ancient North American civilization that utilized native copper deposits for weapons, tools, and decorative objects. This society existed around Lake Superior, where they found sources of native copper and mined them between 6000 and 3000 BC. Copper would have been especially useful to ancient humans as it was much stronger than gold, hard enough to be made into useful items such as fishhooks and woodworking tools, but still soft enough to be easily shaped, unlike meteoric iron. The same deposits of native copper on the Keweenaw Peninsula and Isle Royale were later mined commercially. From 1845 until 1887, the Michigan Copper Country was the leading producer of copper in the United States. Masses of native copper weighing hundreds of tons were sometimes found in the mines. The spectrum of copper minerals closely resembles that of silver, ranging from oxides of its multiple oxidation states through sulfides and silicates to halides and chlorates, iodates, nitrates and others. Natural alloys of copper (particularly with silver; the two metals can also be found in separate but co-mingled masses) are also found. Iron, nickel and cobalt Telluric iron (Earth born) is very rare, with only one major deposit known in the world, located on or near Disko Island in Greenland. Most of the native iron on earth is actually not in fact "native", in the traditional sense, to Earth. It mainly comes from iron-nickel meteorites that formed millions of years ago but were preserved from chemical attack by the vacuum of space, and fell to the earth a relatively short time ago. Metallic meteorites are composed primarily of the iron-nickel alloys: taenite (high nickel content) and kamacite (low nickel content). However, there are a few areas on earth where truly native iron can be found. Native nickel has been described in serpentinite due to hydrothermal alteration of ultramafic rocks in New Caledonia and elsewhere. Metallic cobalt has been reported in the Canadian Lorraine Mine, Cobalt-Gowganda region, the Timiskaming District, Ontario, Canada, and in the Aidyrlya gold deposit in Orenburgskaya Oblast of the Southern Urals. Others All other native metals occur only in small quantities or are found in geologically special regions. For example, metallic cadmium was only found at two locations including the Vilyuy River basin in Siberia. Native molybdenum has been found in lunar regolith and in the Koryakskii volcano in Kamchatka Oblast of Russia. Elsewhere in this region native indium, aluminium, tantalum, tellurium, and other metals have been reported. Native lead is quite rare but somewhat more widespread, as are tin, mercury, arsenic, antimony, and bismuth. Native chromium has been found in small grains in Sichuan, China and other locations.
Physical sciences
Minerals
Earth science
6746613
https://en.wikipedia.org/wiki/Procambarus%20alleni
Procambarus alleni
The Everglades crayfish (Procambarus alleni), sometimes called the Florida crayfish, the blue crayfish, the electric blue crayfish, or the sapphire crayfish, is a species of freshwater crayfish endemic to Florida in the United States. Its natural range is the area east of the St. Johns River and all of Florida from Levy County and Marion County southwards, as well as on some of the Florida Keys. It is included on the IUCN Red List as a species of Least Concern. The blue crayfish is frequently kept in freshwater aquaria. In the wild, this species varies from brown-tan to blue, but an aquarium strain has been selectively bred to achieve a brilliant cobalt blue color. It should not be confused with the burrowing Cambarus monongalensis, also known as the blue crayfish, but native to Pennsylvania, Virginia and West Virginia.
Biology and health sciences
Crayfishes and lobsters
Animals
6748280
https://en.wikipedia.org/wiki/Material
Material
A material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications. Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis. In industry, materials are inputs to manufacturing processes to produce products or more complex materials. Historical elements Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) were succeeded by historical ages: steel age in the 19th century, polymer age in the middle of the following century (plastic age) and silicon age in the second half of the 20th century. Classification by use Materials can be broadly categorized in terms of their use, for example: Building materials are used for construction Building insulation materials are used to retain heat within buildings Refractory materials are used for high-temperature applications Nuclear materials are used for nuclear power and weapons Aerospace materials are used in aircraft and other aerospace applications Biomaterials are used for applications interacting with living systems Material selection is a process to determine which material should be used for a given application. Classification by structure The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy. Microstructure In engineering, materials can be categorised according to their microscopic structure: Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingredient. Ceramics: non-metal, inorganic solids Glasses: amorphous solids Crystals: a solid material whose constituents (such as atoms, molecules, or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. Metals: pure or combined chemical elements with specific chemical bonding behavior Alloys: a mixture of chemical elements of which at least one is often a metal. Polymers: materials based on long carbon or silicon chains Hybrids: Combinations of multiple materials, for example composites. Larger-scale structure A metamaterial is any material engineered to have a property that is not found in naturally occurring materials, usually by combining several materials to form a composite and / or tuning the shape, geometry, size, orientation and arrangement to achieve the desired property. In foams and textiles, the chemical structure is less relevant to immediately observable properties than larger-scale material features: the holes in foams, and the weave in textiles. Classification by properties Materials can be compared and classified by their large-scale physical properties. Mechanical properties Mechanical properties determine how a material responds to applied forces. Examples include: Stiffness Strength Toughness Hardness Thermal properties Materials may degrade or undergo changes of properties at different temperatures. 
Thermal properties also include the material's thermal conductivity and heat capacity, relating to the transfer and storage of thermal energy by the material. Other properties Materials can be compared and categorized by any quantitative measure of their behavior under various conditions. Notable additional properties include the optical, electrical, and magnetic behavior of materials.
Physical sciences
Substance
Chemistry
21346360
https://en.wikipedia.org/wiki/Salvia
Salvia
Salvia () is the largest genus of plants in the sage family Lamiaceae, with nearly 1,000 species of shrubs, herbaceous perennials, and annuals. Within the Lamiaceae, Salvia is part of the tribe Mentheae within the subfamily Nepetoideae. One of several genera, commonly referred to as sage, it includes two widely used herbs, Salvia officinalis (common sage, or just "sage") and Salvia rosmarinus (rosemary, formerly Rosmarinus officinalis). The genus is distributed throughout the Old World and the Americas (over 900 total species), with three distinct regions of diversity: Central America and South America (approximately 600 species); Central Asia and the Mediterranean (250 species); Eastern Asia (90 species). Etymology The name Salvia derives from Latin (sage), from (safe, secure, healthy), an adjective related to (health, well-being, prosperity or salvation), and (to feel healthy, to heal). Pliny the Elder was the first author known to describe a plant called "Salvia" by the Romans, likely describing the type species for the genus Salvia, Salvia officinalis. The common modern English name sage derives from Middle English , which was borrowed from Old French , from Latin (the source of the botanical name). When used without modifiers, the name "sage" generally refers to Salvia officinalis ("common sage" or "culinary sage"), although it is used with modifiers to refer to any member of the genus. The ornamental species are commonly referred to by their genus name Salvia. Description Salvia species include annual, biennial, or perennial herbaceous plants, along with woody subshrubs. The stems are typically angled like other members in Lamiaceae. The leaves are typically entire, but sometimes toothed or pinnately divided. The flowering stems bear small bracts, dissimilar to the basal leaves—in some species the bracts are ornamental and showy. The flowers are produced in racemes or panicles, and generally produce a showy display with flower colors ranging from blue to red, with white and yellow less common. The calyx is normally tubular or bell shaped, without bearded throats, and divided into two parts or lips, the upper lip entire or three-toothed, the lower two-cleft. The corollas are often claw shaped and are two-lipped. The upper lip is usually entire or three-toothed. The lower lip typically has two lobes. The stamens are reduced to two short structures with anthers two-celled, the upper cell fertile, and the lower imperfect. The flower styles are two-cleft. The fruits are smooth ovoid or oblong nutlets and in many species they have a mucilaginous coating. Many members of Salvia have trichomes (hairs) growing on the leaves, stems and flowers, which help to reduce water loss in some species. Sometimes the hairs are glandular and secrete volatile oils that typically give a distinct aroma to the plant. When the hairs are rubbed or brushed, some of the oil-bearing cells are ruptured, releasing the oil. This often results in the plant being unattractive to grazing animals and some insects. Staminal lever mechanism The defining characteristic of the genus Salvia is the unusual pollination mechanism. It is central to any investigation into the systematics, species distribution, or pollination biology of Salvia. It consists of two stamens (instead of the typical four found in other members of the tribe Mentheae) and the two thecae on each stamen are separated by an elongate connective which enables the formation of the lever mechanism. 
Sprengel (1793) was the first to illustrate and describe the nototribic (dorsal) pollination mechanism in Salvia. When a pollinator probes a male stage flower for nectar (pushing the posterior anther theca), the lever causes the stamens to move and the pollen to be deposited on the pollinator. When the pollinator withdraws from the flower, the lever returns the stamens to their original position. In older, female stage flowers, the stigma is bent down in a general location that corresponds to where the pollen was deposited on the pollinator's body. The lever of most Salvia species is not specialized for a single pollinator, but is generic and selected to be easily released by many bird and bee pollinators of varying shapes and sizes. The lever arm can be specialized to be different lengths so that the pollen is deposited on different parts of the pollinator's body. For example, if a bee went to one flower and pollen was deposited on the far back of her body, but then it flew to another flower where the stigma was more forward (anterior), pollination could not take place. This can result in reproductive isolation from the parental population and new speciation can occur. It is believed that the lever mechanism is a key factor in the speciation, adaptive radiation, and diversity of this large genus. Taxonomy History George Bentham was the first to give a full monographic account of the genus in 1832–1836, and based his classifications on staminal morphology. Bentham's work on classifying the family Labiatae (Labiatarum Genera et Species (1836)) is still the only comprehensive and global organization of the family. While he was clear about the integrity of the overall family, he was less confident about his organization of Salvia, the largest genus in Labiatae (also called Lamiaceae). Based on his own philosophy of classification, he wrote that he "ought to have formed five or six genera" out of Salvia. In the end, he felt that the advantage in placing a relatively uniform grouping in one genus was "more than counterbalanced by the necessity of changing more than two hundred names." At that time there were only 291 known Salvia species. Subdivision Bentham eventually organized Salvia into twelve sections (originally fourteen), based on differences in corolla, calyx, and stamens. These were placed into four subgenera that were generally divided into Old World and New World species: Subgenus Salvia: Old World (sections: Hymenosphace, Eusphace, Drymosphace) Subgenus Sclarea: Old World (sections: Horminum, Aethiposis, Plethiosphace) Subgenus Calosphace: New World (section: Calosphace) Subgenus Leonia: Old and New World (sections: Echinosphace, Pycnosphace, Heterosphace, Notiosphace, Hemisphace) His system is still the most widely studied classification of Salvia, even though more than 500 new species have been discovered since his work. Other botanists have since offered modified versions of Bentham's classification system, while botanists in the last hundred years generally do not endorse Bentham's system. It was long assumed that Salvia's unusual pollination and stamen structure had evolved only once, and that therefore Salvia was monophyletic, meaning that all members of the genus evolved from one ancestor. However, the immense diversity in staminal structure, vegetative habit, and floral morphology of the species within Salvia has opened the debate about its infrageneric classifications. 
Phylogenetic analyses Through DNA sequencing, Salvia was shown not to be monophyletic but to consist of three separate clades (Salvia clades I–III), each with different sister groups. These studies also found that the staminal lever mechanism evolved at least two separate times, through convergent evolution. Walker and Sytsma (2007) clarified this parallel evolution in a later paper combining molecular and morphological data to demonstrate three independent lineages of the Salvia lever mechanism, each corresponding to a clade within the genus. It is surprising how similar the staminal lever mechanism structures are between the three lineages, making Salvia a notable example of convergent evolution. Walker and Sytsma (2007) also addressed the question of whether Salvia is truly polyphyletic or just paraphyletic within the tribe Mentheae. To make Salvia monophyletic would require the inclusion of 15 species from the genera Rosmarinus, Perovskia, Dorystaechas, Meriandra, and Zhumeria. The evidence presented by Walker and Sytsma (2007) for three independent origins of the staminal lever indicates that it is not the case that these 15 species (currently not members of the genus) are actually members of Salvia that underwent character reversals; in other words, Salvia is paraphyletic as previously circumscribed. In 2017 Drew et al. recircumscribed Salvia, proposing that the five small embedded genera (Dorystaechas, Meriandra, Perovskia, Rosmarinus, and Zhumeria) be subsumed into a broadly defined Salvia. This approach would require only 15 name changes, whereas maintaining the five small genera and renaming various Salvia taxa would require over 700 name changes. The circumscription of individual species within Salvia has undergone constant revision. Many species are similar to each other, and many species have varieties that have been given different specific names. There have been as many as 2,000 named species and subspecies. Over time, the number has been reduced to less than a thousand. A modern and comprehensive study of Salvia species was done by Gabriel Alziar in his Catalogue Synonymique des Salvia du Monde (1989) (World Catalog of Salvia Synonyms). He found that the number of distinct species and subspecies could be reduced to less than 700 (Clebsch, p. 18). Selected species and their uses Many species are used as herbs, as ornamental plants (usually for flower interest), and sometimes for their ornamental and aromatic foliage. Some species, such as Salvia columbariae and Salvia hispanica, are also grown for their seeds. The Plant List has 986 accepted species names. 
A selection of some well-known species is below.
Salvia apiana: white sage; sacred to a number of Native American peoples, and used by some tribes in their ceremonies
Salvia azurea: blue sage
Salvia buchananii: Buchanan sage; woody-based stoloniferous perennial, deep pink flowers
Salvia cacaliifolia: blue vine sage or Guatemalan sage; pure gentian-blue flowers
Salvia candelabrum: candelabrum sage; woody-based perennial, violet flowers
Salvia columbariae: wild chia; annual plant with seeds that are sometimes used like those of Salvia hispanica
Salvia dianthera Roth: Bengal sage
Salvia divinorum: diviner's sage; sometimes cultivated for hallucinogenic effects; the legality of its use is under review in some US states
Salvia elegans: pineapple sage; widely grown as an ornamental shrub or sub-shrub, with pineapple-scented leaves
Salvia farinacea: mealycup sage, mealy sage; perennial with flowers ranging from purple to blue, used as an ornamental plant
Salvia fruticosa: Greek sage; commonly grown and harvested as an alternative to common sage
Salvia fulgens: cardinal sage, Mexican scarlet sage; small evergreen sub-shrub, red flowers
Salvia guaranitica: hummingbird sage, anise-scented sage; tall perennial, deep blue flowers
Salvia hispanica: chia; produces edible seeds high in protein and in the omega-3 fatty acid α-linolenic acid (ALA)
Salvia involucrata: roseleaf sage; woody-based perennial
Salvia jurisicii: Ovche Pole sage; a rare, compact "feathery" perennial endemic to North Macedonia, violet flowers
Salvia leucantha: Mexican bush sage, woolly sage; ornamental evergreen subshrub, white/pink flowers
Salvia microphylla: baby sage; small ornamental shrub from Mexico, widely cultivated with many cultivars
Salvia miltiorrhiza: red sage, Danshen; Chinese medicinal herb
Salvia nemorosa: woodland sage, Balkan clary; perennial with many ornamental varieties and cultivars
Salvia officinalis: sage, common sage; used widely in cooking, as an ornamental, and in herbal medicine
Salvia patens: gentian sage; herbaceous perennial, blue flowers
Salvia pratensis: clary; herbaceous perennial, violet flowers
Salvia rosmarinus: rosemary; woody shrub, blue flowers
Salvia sclarea: clary; grown as an ornamental and to some extent for perfume oils
Salvia spathacea: California hummingbird sage, pitcher sage; ornamental, fruit-scented with rose pink flowers
Salvia splendens: scarlet sage; popular tender ornamental bedding or pot plant
Salvia uliginosa: bog sage; herbaceous perennial, blue flowers
Ecology Herbivory Salvia species are used as food plants by the larvae of some Lepidoptera (butterfly and moth) species, including the bucculatricid leaf-miner Bucculatrix taeniola, which feeds exclusively on the genus, and the Coleophora case-bearers C. aegyptiacae and C. salviella (both feed exclusively on Salvia aegyptiaca), and C. ornatipennella and C. virgatella (both recorded on Salvia pratensis). Hybrids Many interspecific hybrids occur naturally, with a relatively high degree of crossability, but some, such as Salvia fruticosa × Salvia tomentosa, have been intentional. A natural hybrid, Salvia longispicata × Salvia farinacea, has given rise to a series of popular ornamentals such as Salvia 'Indigo Spires' and Salvia 'Balsalmisp'. AGM cultivars Numerous garden-worthy cultivars and varieties have been produced, often with mixed or unknown parentage. 
The following have gained the Royal Horticultural Society's Award of Garden Merit:
Salvia 'Amistad': bushy upright perennial, deep blue/purple flowers
Salvia 'Dyson's Joy': small, bushy perennial, bicolor red/pink flowers
Salvia 'Hot Lips': bushy evergreen, red/white flowers
Salvia 'Jezebel': bushy evergreen perennial, red flowers
Salvia 'Nachtvlinder': bushy evergreen perennial, purple flowers
Salvia 'Ribambelle': bushy perennial, salmon-pink flowers
Salvia 'Royal Bumble': evergreen shrub, red flowers
Salvia × jamensis 'Javier': bushy perennial, purple flowers
Salvia × jamensis 'Los Lirios': bushy shrub, pink flowers
Salvia × jamensis 'Peter Vidgeon': bushy perennial, pale pink flowers
Salvia × jamensis 'Raspberry Royale': evergreen subshrub, raspberry pink flowers
Salvia × superba 'Rubin': clump-forming perennial, pale pink flowers
Salvia × sylvestris 'Blauhügel': herbaceous perennial, violet-blue flowers
Salvia × sylvestris 'Mainacht': compact perennial, deep violet flowers
Salvia × sylvestris 'Tänzerin': perennial, purple flowers
Biology and health sciences
Lamiales
null
21346982
https://en.wikipedia.org/wiki/Kernel%20%28operating%20system%29
Kernel (operating system)
A kernel is a computer program at the core of a computer's operating system that always has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources (e.g. I/O, memory, cryptography) via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup (after the bootloader). It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit. The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application software or other less critical parts of the operating system. The kernel performs its tasks, such as running processes, managing hardware devices such as the hard disk, and handling interrupts, in this protected kernel space. In contrast, application programs such as browsers, word processors, or audio or video players use a separate area of memory, user space. This separation prevents user data and kernel data from interfering with each other and causing instability and slowness, as well as preventing malfunctioning applications from affecting other applications or crashing the entire operating system. Even in systems where the kernel is included in application address spaces, memory protection is used to prevent unauthorized applications from modifying the kernel. The kernel's interface is a low-level abstraction layer. When a process requests a service from the kernel, it must invoke a system call, usually through a wrapper function. There are different kernel architecture designs. Monolithic kernels run entirely in a single address space with the CPU executing in supervisor mode, mainly for speed. Microkernels run most but not all of their services in user space, like user processes do, mainly for resilience and modularity. MINIX 3 is a notable example of microkernel design. The Linux kernel is both monolithic and modular, since it can insert and remove loadable kernel modules at runtime. This central component of a computer system is responsible for executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors. Random-access memory Random-access memory (RAM) is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and determining what to do when not enough memory is available. Input/output devices I/O devices include, but are not limited to, peripherals such as keyboards, mice, disk drives, printers, USB devices, network adapters, and display devices. The kernel provides convenient methods for applications to use these devices which are typically abstracted by the kernel so that applications do not need to know their implementation details. 
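The device abstraction described above can be seen from user space in a short, hedged C sketch (assuming a Unix-like system where character devices such as /dev/urandom appear in the file system): the program opens and reads a device with exactly the same calls it would use for an ordinary file, and the kernel's driver supplies the device-specific behaviour.

/* Minimal sketch, POSIX assumed: reading a device through the ordinary
 * file interface; the kernel's driver handles the hardware details. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    unsigned char buf[16];
    int fd = open("/dev/urandom", O_RDONLY);   /* ask the kernel for a descriptor */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof buf);     /* same read() used for regular files */
    if (n < 0) { perror("read"); close(fd); return 1; }

    for (ssize_t i = 0; i < n; i++)
        printf("%02x", buf[i]);                /* print the bytes the kernel returned */
    putchar('\n');

    close(fd);                                 /* release the kernel-managed resource */
    return 0;
}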
Resource management Key aspects necessary in resource management are defining the execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain. Kernels also provide methods for synchronization and inter-process communication (IPC). These implementations may be located within the kernel itself or the kernel can also rely on other processes it is running. Although the kernel must provide IPC in order to provide access to the facilities provided by each other, kernels must also provide running programs with a method to make requests to access these facilities. The kernel is also responsible for context switching between processes or threads. Memory management The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other. On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging. Virtual addressing also allows creation of virtual partitions of memory in two disjointed areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel. This fundamental partition of memory space has contributed much to the current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g., Singularity) take other approaches. Device management To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. A device driver is a computer program encapsulating, monitoring and controlling a hardware device (via its Hardware/Software Interface (HSI)) on behalf of the OS. It provides the operating system with an API, procedures and information about how to control and communicate with a certain piece of hardware. Device drivers are an important and vital dependency for all OS and their applications. The design goal of a driver is abstraction; the function of the driver is to translate the OS-mandated abstract function calls (programming calls) into device-specific calls. 
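The demand paging described above can be made concrete with a small user-space sketch (POSIX assumed, not tied to any particular kernel): mmap() only establishes virtual addresses for a file, and the kernel loads each page from disk the first time the program touches it and a page fault occurs.

/* Sketch of demand paging as seen from user space, assuming POSIX mmap(). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    /* Reserve virtual addresses for the whole file; no data is read yet. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch one byte per page; each first touch faults that page into RAM. */
    long pagesize = sysconf(_SC_PAGESIZE);
    unsigned long checksum = 0;
    for (off_t off = 0; off < st.st_size; off += pagesize)
        checksum += p[off];

    printf("mapped %lld bytes, checksum %lu\n", (long long)st.st_size, checksum);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}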
In theory, a device should work correctly with a suitable driver. Device drivers are used for e.g. video cards, sound cards, printers, scanners, modems, and Network cards. At the hardware level, common abstractions of device drivers include: Interfacing directly Using a high-level interface (Video BIOS) Using a lower-level device driver (file drivers using disk drivers) Simulating work with hardware, while doing something entirely different And at the software level, device driver abstractions include: Allowing the operating system direct access to hardware resources Only implementing primitives Implementing an interface for non-driver software such as TWAIN Implementing a language (often a high-level language such as PostScript) For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel. A kernel must maintain a list of available devices. This list may be known in advance (e.g., on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play). In plug-and-play systems, a device manager first performs a scan on different peripheral buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers. As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead. System calls In computing, a system call is how a process requests a service from an operating system's kernel that it does not normally have permission to run. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user-level process, e.g., I/O performed with a device present on the system, or any form of communication with other processes requires the use of system calls. A system call is a mechanism that is used by the application program to request a service from the operating system. They use a machine-code instruction that causes the processor to change mode. An example would be from supervisor mode to protected mode. This is where the operating system performs actions like accessing hardware devices or the memory management unit. Generally the operating system provides a library that sits between the operating system and normal user programs. Usually it is a C library such as Glibc or Windows API. The library handles the low-level details of passing information to the kernel and switching to supervisor mode. System calls include close, open, read, wait and write. To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions. 
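As a brief illustration of the wrapper relationship just described, the sketch below (Linux and glibc assumed; syscall numbers and the syscall() helper are platform-specific) requests the same kernel service twice, once through the C library's write() wrapper and once by call number.

/* Sketch: libc wrapper versus a direct system call, Linux/glibc assumed. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char *msg1 = "via the C library wrapper\n";
    const char *msg2 = "via syscall(SYS_write, ...)\n";

    write(STDOUT_FILENO, msg1, strlen(msg1));               /* wrapper traps into the kernel internally */
    syscall(SYS_write, STDOUT_FILENO, msg2, strlen(msg2));  /* same kernel service, invoked by number */
    return 0;
}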
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are: Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common. Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the processor. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common. Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some operating systems for PCs make use of them when available. Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests. Kernel design decisions Protection An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection. The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; according to the protection principles they satisfy (e.g., Denning); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more. Support for hierarchical protection domains is typically implemented using CPU modes. Many kernels provide implementation of "capabilities", i.e., objects that are provided to user code which allow limited access to an underlying object managed by the kernel. A common example is file handling: a file is a representation of information stored on a permanent storage device. The kernel may be able to perform many different operations, including read, write, delete or execute, but a user-level application may only be permitted to perform some of these operations (e.g., it may only be allowed to read the file). A common implementation of this is for the kernel to provide an object to the application (typically so called a "file handle") which the application may then invoke operations on, the validity of which the kernel checks at the time the operation is requested. Such a system may be extended to cover all objects that the kernel manages, and indeed to objects provided by other user applications. An efficient and simple way to provide hardware support of capabilities is to delegate to the memory management unit (MMU) the responsibility of checking access-rights for every memory access, a mechanism called capability-based addressing. Most commercial computer architectures lack such MMU support for capabilities. An alternative approach is to simulate capabilities using commonly supported hierarchical domains. 
In this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel then checks whether the application's capability grants it permission to perform the requested action, and if it is permitted performs the access for it (either directly, or by delegating the request to another user-level process). The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly. If the firmware does not support protection mechanisms, it is possible to simulate protection at a higher level, for example by simulating capabilities by manipulating page tables, but there are performance implications. Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection. An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels. One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security. The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level. In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support. According to Mars Research Group developers, a lack of isolation is one of the main factors undermining kernel security. They propose their driver isolation framework for protection, primarily in the Linux kernel. Hardware- or language-based protection Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule, such as a user process that tries to write to kernel memory. In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods. An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement. Advantages of this approach include: No need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. 
Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space. Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware. Disadvantages include: Longer application startup time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode. Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance. Examples of systems with language-based protection include JX and Microsoft's Singularity. Process cooperation Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation. However this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible. A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared memory and remote procedure calls. I/O device management The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes. Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction, or system to crash. With this, depending on the complexity of the device, some devices can get surprisingly complex to program, and use several different controllers. Because of this, providing a more abstract interface to manage the device is important. This interface is normally done by a device driver or hardware abstraction layer. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. This can be done through the BIOS, or through one of the various system buses (such as PCI/PCIE, or USB). Using an example of a video driver, when an application requests an operation on a device, such as displaying a character, the kernel needs to send this request to the current active video driver. The video driver, in turn, needs to carry out this request. This is an example of inter-process communication (IPC). Kernel-wide design approaches The above listed tasks and features can be provided in many ways that differ from each other in design and implementation. The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation". 
Example: Mechanism: User login attempts are routed to an authorization server Policy: Authorization server requires a password which is verified against stored passwords in a database Because the mechanism and policy are separated, the policy can be easily changed to e.g. require the use of a security token. In minimal microkernel just some very basic policies are included, and its mechanisms allows what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (as memory management, high level process scheduling, file system management, etc.). A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them. Per Brinch Hansen presented arguments in favour of separation of mechanism and policy. The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems, a problem common in computer architecture. The monolithic design is induced by the "kernel mode"/"user mode" architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems; in fact, every module needing protection is therefore preferably included into the kernel. This link between monolithic design and "privileged mode" can be reconducted to the key issue of mechanism-policy separation; in fact the "privileged mode" architectural approach melds together the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design (see Separation of protection and security). While monolithic kernels execute all of their code in the same address space (kernel space), microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase. Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel. Monolithic kernels In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. UNIX developer Ken Thompson stated that "it is in [his] opinion easier to implement a monolithic kernel". The main disadvantages of monolithic kernels are the dependencies between system components a bug in a device driver might crash the entire system and the fact that large kernels can become very difficult to maintain; Thompson also stated that "It is also easier for [a monolithic kernel] to turn into a mess in a hurry as it is modified." Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Every part which is to be accessed by most programs which cannot be put in a library is in the kernel space: Device drivers, scheduler, memory handling, file systems, and network stacks. Many system calls are provided to applications, to allow them to access all those services. 
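As a rough illustration of how such services are exposed, a monolithic kernel commonly dispatches system calls through a table indexed by call number. The toy sketch below is not any real kernel's code; the names (do_syscall, SYS_GETPID, SYS_WRITE_TOY) are invented for illustration, and the "services" simply run in the host process.

/* Toy sketch of tabular system-call dispatch inside a monolithic kernel. */
#include <stdint.h>
#include <stdio.h>

typedef long (*syscall_fn)(long a0, long a1, long a2);

static long sys_getpid(long a0, long a1, long a2) { (void)a0; (void)a1; (void)a2; return 1234; }
static long sys_write(long fd, long buf, long len) {
    /* A real kernel would validate the user pointer and call the file-system
     * or device layer; this toy just echoes the bytes to stdout. */
    (void)fd;
    return (long)fwrite((const void *)(uintptr_t)buf, 1, (size_t)len, stdout);
}

enum { SYS_GETPID = 0, SYS_WRITE_TOY = 1, NUM_SYSCALLS = 2 };

static const syscall_fn syscall_table[NUM_SYSCALLS] = {
    [SYS_GETPID]    = sys_getpid,
    [SYS_WRITE_TOY] = sys_write,
};

/* The kernel's trap entry point would look roughly like this. */
static long do_syscall(long nr, long a0, long a1, long a2) {
    if (nr < 0 || nr >= NUM_SYSCALLS)
        return -1;                          /* unknown call number: reject */
    return syscall_table[nr](a0, a1, a2);
}

int main(void) {
    const char msg[] = "hello from the toy syscall table\n";
    long pid = do_syscall(SYS_GETPID, 0, 0, 0);
    do_syscall(SYS_WRITE_TOY, 1, (long)(uintptr_t)msg, sizeof msg - 1);
    printf("toy pid: %ld\n", pid);
    return 0;
}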
A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as or faster than the one that was specifically designed for the hardware, although more relevant in a general sense. Modern monolithic kernels, such as the Linux kernel, the FreeBSD kernel, the AIX kernel, the HP-UX kernel, and the Solaris kernel, all of which fall into the category of Unix-like operating systems, support loadable kernel modules, allowing modules to be loaded into the kernel at runtime, permitting easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space. Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially calls are made within programs and a checked copy of the request is passed through the system call. Hence, not far to travel at all. The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems. These types of kernels consist of the core functions of the operating system and the device drivers with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware. They provide a small set of simple hardware abstractions and use applications called servers to provide more functionality. This particular approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency and memory management in several modules that run in supervisor mode. This design has several flaws and limitations: Coding in kernel can be challenging, in part because one cannot use common libraries (like a full-featured libc), and because one needs to use a source-level debugger like gdb. Rebooting the computer is often required. This is not just a problem of convenience to the developers. When debugging is harder, and as difficulties become stronger, it becomes more likely that code will be "buggier". Bugs in one part of the kernel have strong side effects; since every function in the kernel has all the privileges, a bug in one function can corrupt data structure of another, totally unrelated part of the kernel, or of any running program. Kernels often become very large and difficult to maintain. Even if the modules servicing these operations are separate from the whole, the code integration is tight and difficult to do correctly. Since the modules run in the same address space, a bug can bring down the entire system. Microkernels Microkernel (also abbreviated μK or uK) is the term describing an approach to operating system design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space". 
A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls. Only the parts which really require being in a privileged mode are in kernel space: IPC (inter-process communication), the basic scheduler or scheduling primitives, basic memory handling, and basic I/O primitives. Many critical parts now run in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to traditional "monolithic" kernel design, whereby all system functionality was put in a single static program running in a special "system" mode of the processor. In the microkernel, only the most fundamental tasks are performed, such as being able to access some (not necessarily all) of the hardware, manage memory and coordinate message passing between the processes. Some systems that use microkernels are QNX and the HURD. In the case of QNX and Hurd, user sessions can be entire snapshots of the system itself, or "views" as they are referred to. The very essence of the microkernel architecture illustrates some of its advantages: it is easier to maintain; patches can be tested in a separate instance and then swapped in to take over a production instance; development is rapid, and new software can be tested without having to reboot the kernel; and there is more persistence in general, since if one instance goes haywire, it is often possible to substitute it with an operational mirror. Most microkernels use a message passing system to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel. As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls. The rationale was that it would bring modularity in the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and better performing. They are part of operating systems like GNU Hurd, MINIX, MkLinux, QNX and Redox OS. Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency. These types of kernels normally provide only the minimal services, such as defining memory address spaces, inter-process communication (IPC) and process management. Other functions, such as running the hardware processes, are not handled directly by microkernels. Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash.
However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Other services provided by the kernel, such as networking, are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started. The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of microkernels in comparison with monolithic kernels. Disadvantages do exist for the microkernel, however. Some are: a larger running memory footprint; more software is required for interfacing, with a potential for performance loss; messaging bugs can be harder to fix due to the longer trip they have to take versus the one-off copy in a monolithic kernel; and process management in general can be very complicated. The disadvantages for microkernels are extremely context-based. As an example, they work well for small single-purpose (and critical) systems, because if not many processes need to run, then the complications of process management are effectively mitigated. A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel. It is also possible to dynamically switch among operating systems and to have more than one active simultaneously. Monolithic kernels vs. microkernels As the computer kernel grows, so grows the size and vulnerability of its trusted computing base; and, besides reducing security, there is the problem of enlarging the memory footprint. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support. To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code. By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers. As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum. There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate. Performance Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system. Some developers also maintain that monolithic systems are extremely efficient if well written. The monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing. The performance of microkernels was poor in both the 1980s and early 1990s. However, studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency.
The explanations of this data were left to "folklore", with the assumption that they were due to the increased frequency of switches from "kernel-mode" to "user-mode", to the increased frequency of inter-process communication and to the increased frequency of context switches. In fact, as guessed in 1995, the reasons for the poor performance of microkernels might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts. Therefore, it remained to be studied whether the solution to building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques. On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel has a significant performance drawback each time there is an interaction between different levels of protection (i.e., when a process has to manipulate a data structure both in "user mode" and "supervisor mode"), since this requires message copying by value. Hybrid (or modular) kernels Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT 3.1, NT 3.5, NT 3.51, NT 4.0, 2000, XP, Vista, 7, 8, 8.1 and 10. Apple's own macOS uses a hybrid kernel called XNU, which is based upon code from OSF/1's Mach kernel (OSFMK 7.3) and FreeBSD's monolithic kernel. Hybrid kernels are similar to microkernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers to accommodate the major advantages of both monolithic and microkernels. These types of kernels are extensions of microkernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space. Many traditionally monolithic kernels are now at least adding (or else using) the module capability. The most well known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it that are built into the core kernel binary or binaries that load into memory on demand. A module with tainted code has the potential to destabilize a running kernel. It is possible to write a driver for a microkernel in a completely separate memory space and test it before "going" live. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, thereby opening the doorway to possible pollution. A few advantages of the modular (or hybrid) kernel are: faster development time for drivers that can operate from within modules; no reboot required for testing (provided the kernel is not destabilized); on-demand capability, versus spending time recompiling a whole kernel for things like new drivers or subsystems; and faster integration of third-party technology (related to development but pertinent unto itself nonetheless). Modules, generally, communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system) so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords.
Essentially, it is two system calls and often the safety checks that only have to be done once in the monolithic kernel now may be done twice. Some of the disadvantages of the modular approach are: With more interfaces to pass through, the possibility of increased bugs exists (which implies more security holes). Maintaining modules can be confusing for some administrators when dealing with problems like symbol differences. Nanokernels A nanokernel delegates virtually all services including even the most basic ones like interrupt controllers or the timer to device drivers to make the kernel memory requirement even smaller than a traditional microkernel. Exokernels Exokernels are a still-experimental approach to operating system design. They differ from other types of kernels in limiting their functionality to the protection and multiplexing of the raw hardware, providing no hardware abstractions on top of which to develop applications. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program. Exokernels in themselves are extremely small. However, they are accompanied by library operating systems (see also unikernel), providing application developers with the functionalities of a conventional operating system. This comes down to every user writing their own rest-of-the kernel from near scratch, which is a very-risky, complex and quite a daunting assignment - particularly in a time-constrained production-oriented environment, which is why exokernels have never caught on. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API, for example one for high level UI development and one for real-time control. Multikernels A multikernel operating system treats a multi-core machine as a network of independent cores, as if it were a distributed system. It does not assume shared memory but rather implements inter-process communications as message passing. Barrelfish was the first operating system to be described as a multikernel. History of kernel development Early operating system kernels Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s, which were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels. In 1969, the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus "upon which operating systems for different purposes could be built in an orderly manner", what would be called the microkernel approach. Time-sharing operating systems In the decade preceding Unix, computers had grown enormously in power to the point where computer operators were looking for new ways to get people to use their spare time on their machines. 
One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine. The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965. Another ongoing issue was properly handling computing resources: users spent most of their time staring at the terminal and thinking about what to input instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems. Amiga The Commodore Amiga was released in 1985, and was among the first and certainly most successful home computers to feature an advanced kernel architecture. The AmigaOS kernel's executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, and user-mode applications can ask the operating system to execute their code in kernel mode. Unix During the design phase of Unix, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation. For instance, printers were represented as a "file" at a known location: when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level; that is, both devices and files would be instances of some lower-level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain. In the Unix model, the operating system consists of two parts: first, the huge collection of utility programs that drive most operations; second, the kernel that runs the programs. Under Unix, from a programming standpoint, the distinction between the two is fairly thin; the kernel is a program, running in supervisor mode, that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and that provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space. Over the years the computing model changed, and Unix's treatment of everything as a file or byte stream no longer was as universally applicable as it was before.
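The uniform treatment referred to here is visible at the system-call level: the same open, write, and read calls apply to a regular file, a device node, and a pipe. A minimal userspace sketch in C (with /dev/null standing in for a device such as a printer, purely for illustration):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello\n";

    /* A regular file and a device node are written with the same calls. */
    int file = open("demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int dev  = open("/dev/null", O_WRONLY);   /* any device file would do */
    write(file, msg, strlen(msg));
    write(dev, msg, strlen(msg));
    close(file);
    close(dev);

    /* A pipe presents the same byte-stream interface yet again. */
    int fds[2];
    pipe(fds);
    write(fds[1], msg, strlen(msg));
    char buf[16];
    ssize_t n = read(fds[0], buf, sizeof buf);
    printf("read %zd bytes back from the pipe\n", n);
    return 0;
}
```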
Although a terminal could be treated as a file or a byte stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. It is also because the modularity of the Unix kernel is extensively scalable. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels like Linux, of modern Unix successors like GNU, have more than 13 million lines. Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in the many distributions of GNU, IBM AIX, as well as the Berkeley Software Distribution variant kernels such as FreeBSD, DragonFly BSD, OpenBSD, NetBSD, and macOS. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them. Classic Mac OS and macOS Apple first launched its classic Mac OS in 1984, bundled with its Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, the modern macOS (originally named Mac OS X) is based on Darwin, which uses a hybrid kernel called XNU, which was created by combining the 4.3BSD kernel and the Mach kernel. Microsoft Windows Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, with the Windows 9x series adding 32-bit addressing and pre-emptive multitasking; but ended with the release of Windows Me in 2000. Microsoft also developed Windows NT, an operating system with a very similar interface, but intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993, and was introduced to general users with the release of Windows XP in October 2001—replacing Windows 9x with a completely different, much more sophisticated operating system. This is the line that continues with Windows 11. The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Managers, with a client/server layered subsystem model. It was designed as a modified microkernel, as the Windows NT kernel was influenced by the Mach microkernel but does not meet all of the criteria of a pure microkernel. IBM Supervisor Supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions and regulates the flow of work in a data processing system. Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel. In the 1970s, IBM further abstracted the supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. 
the capacity to run multiple operating systems on the same machine totally independently from each other. Hence the first such system was called Virtual Machine or VM. Development of microkernels Although Mach, developed by Richard Rashid at Carnegie Mellon University, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow. Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces. Additionally, QNX is a microkernel which is principally used in embedded systems, and the open-source software MINIX, while originally created for educational purposes, is now focused on being a highly reliable and self-healing microkernel OS.
https://en.wikipedia.org/wiki/Unix-like
Unix-like
A Unix-like (sometimes referred to as UN*X, *nix or *NIX) operating system is one that behaves in a manner similar to a Unix system, although not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell. Although there are general philosophies for Unix design, there is no technical standard defining the term, and opinions can differ about the degree to which a particular operating system or application is Unix-like. Some well-known examples of Unix-like operating systems include Linux, FreeBSD and OpenBSD. These systems are often used on servers as well as on personal computers and other devices. Many popular applications, such as the Apache web server and the Bash shell, are also designed to be used on Unix-like systems. Definition The Open Group owns the UNIX trademark and administers the Single UNIX Specification, with the "UNIX" name being used as a certification mark. They do not approve of the construction "Unix-like", and consider it a misuse of their trademark. Their guidelines require "UNIX" to be presented in uppercase or otherwise distinguished from the surrounding text, strongly encourage using it as a branding adjective for a generic word such as "system", and discourage its use in hyphenated phrases. Other parties frequently treat "Unix" as a genericized trademark. Some add a wildcard character to the name to make an abbreviation like "Un*x" or "*nix", since Unix-like systems often have Unix-like names such as AIX, A/UX, HP-UX, IRIX, Linux, Minix, Ultrix, Xenix, and XNU. These patterns do not literally match many system names, but are still generally recognized to refer to any UNIX system, descendant, or work-alike, even those with completely dissimilar names such as Darwin/macOS, illumos/Solaris or FreeBSD. In 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. History "Unix-like" systems started to appear in the late 1970s and early 1980s. Many proprietary versions, such as Idris (1978), UNOS (1982), Coherent (1983), and UniFlex (1985), aimed to provide businesses with the functionality available to academic users of UNIX. When AT&T allowed relatively inexpensive commercial binary sublicensing of UNIX in 1979, a variety of proprietary systems were developed based on it, including AIX, HP-UX, IRIX, SunOS, Tru64, Ultrix, and Xenix. These largely displaced the proprietary clones. Growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4.4BSD, Linux, and Minix. Some of these have in turn been the basis for commercial "Unix-like" systems, such as BSD/OS and macOS. Several versions of (Mac) OS X/macOS running on Intel-based Mac computers have been certified under the Single UNIX Specification. The BSD variants are descendants of UNIX developed by the University of California at Berkeley, with UNIX source code from Bell Labs. However, the BSD code base has evolved since then, replacing all the AT&T code. Since the BSD variants are not certified as compliant with the Single UNIX Specification, they are referred to as "UNIX-like" rather than "UNIX". 
Categories Dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. Raymond and Rob Landley have suggested that there are three kinds of Unix-like systems: Genetic UNIX: those systems with a historical connection to the AT&T codebase. Most commercial UNIX systems fall into this category. So do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s. Some of these systems have no original AT&T code but can still trace their ancestry to AT&T designs. Trademark or branded UNIX: these systems, largely commercial in nature, have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name. Most such systems are commercial derivatives of the System V code base in one form or another, although Apple macOS 10.5 and later is a BSD variant that has been certified, and EulerOS and Inspur K-UX are Linux distributions that have been certified. A few other systems (such as IBM z/OS) earned the trademark through a POSIX compatibility layer and are not otherwise inherently Unix systems. Many ancient UNIX systems no longer meet this definition. Functional UNIX: broadly, any Unix-like system that behaves in a manner roughly consistent with the UNIX specification, including having a "program which manages your login and command line sessions"; more specifically, this can refer to systems such as Linux or Minix that behave similarly to a UNIX system but have no genetic or trademark connection to the AT&T code base. Most free/open-source implementations of the UNIX design, whether genetic UNIX or not, fall into the restricted definition of this third category due to the expense of obtaining Open Group certification, which costs thousands of dollars. Around 2001, Linux was given the opportunity to obtain certification, including free help from the POSIX chair Andrew Josey, for the symbolic price of one dollar. There have been some activities to make Linux POSIX-compliant, with Josey having prepared a list of differences between the POSIX standard and the Linux Standard Base specification, but in August 2005 this project was shut down because of a lack of interest from the LSB work group. Compatibility layers Some non-Unix-like operating systems provide a Unix-like compatibility layer, with varying degrees of Unix-like functionality. IBM z/OS's UNIX System Services is sufficiently complete as to be certified as trademark UNIX. Cygwin, MSYS, and MSYS2 each provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. The MKS Toolkit and UWIN are comprehensive interoperability tools which allow the porting of Unix programs to Windows. Windows NT-type systems have a POSIX environmental subsystem. Subsystem for Unix-based Applications (previously Interix) provides Unix-like functionality as a Windows NT subsystem (discontinued). Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft and containing no Linux code, with Ubuntu user-mode binaries running on top of it. Windows Subsystem for Linux version 2 (WSL2) provides a fully functional Linux environment running in a virtual machine.
OpenHarmony employs the third-party musl libc library and ports of native APIs, providing POSIX support for Linux system calls on both of its default kernels (the Linux kernel and LiteOS) through the system's multi-kernel Kernel Abstraction Layer subsystem, for vendor and developer interoperability. HarmonyOS and HarmonyOS NEXT include an OpenHarmony user mode containing the musl libc library and ports of native APIs, providing POSIX support for Linux system calls on the default kernels of the standard system (the Linux kernel) and of the small and lightweight system (LiteOS) through the same multi-kernel Kernel Abstraction Layer subsystem, for interoperability with legacy Unix-like functionality. Other means of Windows-Unix interoperability include: the above Windows packages can be used with various X servers for Windows; Hummingbird Connectivity provides several ways for Windows machines to connect to Unix and Linux machines, from terminal emulators to X clients and servers, and others; the Windows Resource Kits for versions of Windows NT include a Bourne Shell, some command-line tools, and a version of Perl; and Hamilton C shell is a version of csh written specifically for Windows.
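Much of what these compatibility layers have to supply is the POSIX programming interface itself. As a rough illustration (not tied to any particular layer), the small C program below uses only POSIX calls (fork, getpid, waitpid), so it can generally be compiled unchanged on Linux, the BSDs, macOS, and under environments such as Cygwin or WSL that expose the same interface.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();            /* POSIX process creation */
    if (child == 0) {
        printf("child pid %ld\n", (long)getpid());
        _exit(0);                    /* child exits immediately */
    }
    waitpid(child, NULL, 0);         /* parent waits for the child */
    printf("parent pid %ld\n", (long)getpid());
    return 0;
}
```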
https://en.wikipedia.org/wiki/Unix
Unix
Unix (trademarked as UNIX) is a family of multitasking, multi-user computer operating systems that derive from the original AT&T Unix, whose development started in 1969 at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), Sun Microsystems (SunOS/Solaris), HP/HPE (HP-UX), and IBM (AIX). The early versions of Unix—which are retrospectively referred to as "Research Unix"—ran on computers like the PDP-11 and VAX; Unix was commonly used on minicomputers and mainframes from the 1970s onwards. It distinguished itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language (in 1973), which allows Unix to operate on numerous platforms. Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy". According to this philosophy, the operating system should provide a set of simple tools, each of which performs a limited, well-defined function. A unified and inode-based filesystem and an inter-process communication mechanism known as "pipes" serve as the main means of communication, and a shell scripting and command language (the Unix shell) is used to combine the tools to perform complex workflows. Version 7 in 1979 was the final widely released Research Unix, after which AT&T sold UNIX System III, based on Version 7, commercially in 1982; to avoid confusion between the Unix variants, AT&T combined various versions developed by others and released it as UNIX System V in 1983. However, as these were closed-source, the University of California, Berkeley continued developing BSD as an alternative. Other vendors that were beginning to create commercialized versions of Unix would base their version on either System V (like Silicon Graphics's IRIX) or BSD (like SunOS). Amid the "Unix wars" of standardization, AT&T alongside Sun merged System V, BSD, SunOS and Xenix, solidifying their features into one package as UNIX System V Release 4 (SVR4) in 1989, and it was commercialized by Unix System Laboratories, an AT&T spinoff. A rival Unix from other vendors was released as OSF/1; however, most commercial Unix vendors eventually changed their distributions to be based on SVR4 with BSD features added on top. AT&T sold Unix to Novell in 1992, which later sold the UNIX trademark to a new industry consortium called The Open Group, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS). Since the 1990s, Unix systems have appeared on home-class computers: BSD/OS was the first to be commercialized for i386 computers, and since then free Unix-like clones of existing systems have been developed, such as FreeBSD and the combination of Linux and GNU, the latter of which have since eclipsed Unix in popularity. Until 2005, Unix was the most widely used server operating system. However, in the present day, Unix distributions like IBM AIX, Oracle Solaris and OpenServer continue to be widely used in certain fields. Overview Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers.
The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to support multi-tasking or to be portable. Later, Unix gradually gained multi-tasking and multi-user capabilities in a time-sharing configuration, as well as portability. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves". By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system. The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being a lower priority realm where most application programs operate. History The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE 645 mainframe computer. Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name. The new operating system was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. 
Ken Thompson faced multiple challenges attempting the kernel port due to the evolving state of C, which lacked key features like structures at the time. Version 4 Unix, however, still had much PDP-11 specific code, and was not suitable for porting. The first port to another platform was a port of Version 6, made four years later (1977) at the University of Wollongong for the Interdata 7/32, followed by a Bell Labs port of Version 7 to the Interdata 8/32 during 1977 and 1978. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois Urbana–Champaign (UIUC) Department of Computer Science. During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups, which in turn led to Unix fragmenting into multiple, similar — but often slightly and mutually incompatible — systems including DYNIX, HP-UX, SunOS/Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors. In the 1990s, Unix and Unix-like systems grew in popularity and became the operating system of choice for over 90% of the world's top 500 fastest supercomputers, as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, later renamed macOS. Unix-like operating systems are widely used in modern servers, workstations, and mobile devices. Standards In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. Starting in 1998, the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification. In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4's Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture. The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux. Components The Unix system is composed of several components that were originally packaged together. By including the development environment, libraries, documents and the portable, modifiable source code for all of these components, in addition to the kernel of an operating system, Unix was a self-contained software system. This was one of the key reasons it emerged as an important teaching and learning tool and has had a broad influence. See , below. 
The inclusion of these components did not make the system large the original V7 UNIX distribution, consisting of copies of all of the compiled binaries plus all of the source code and documentation occupied less than 10 MB and arrived on a single nine-track magnetic tape, earning its reputation as a portable system. The printed documentation, typeset from the online sources, was contained in two volumes. The names and filesystem locations of the Unix components have changed substantially across the history of the system. Nonetheless, the V7 implementation has the canonical early structure: Kernel source code in /usr/sys, composed of several sub-components: conf configuration and machine-dependent parts, including boot code dev device drivers for control of hardware (and some pseudo-hardware) sys operating system "kernel", handling memory management, process scheduling, system calls, etc. h header files, defining key structures within the system and important system-specific invariables Development environment early versions of Unix contained a development environment sufficient to recreate the entire system from source code: ed text editor, for creating source code files cc C language compiler (first appeared in V3 Unix) as machine-language assembler for the machine ld linker, for combining object files lib object-code libraries (installed in /lib or /usr/lib). libc, the system library with C run-time support, was the primary library, but there have always been additional libraries for things such as mathematical functions (libm) or database access. V7 Unix introduced the first version of the modern "Standard I/O" library stdio as part of the system library. Later implementations increased the number of libraries significantly. make build manager (introduced in PWB/UNIX), for effectively automating the build process include header files for software development, defining standard interfaces and system invariants Other languages V7 Unix contained a Fortran-77 compiler, a programmable arbitrary-precision calculator (bc, dc), and the awk scripting language; later versions and implementations contain many other language compilers and toolsets. Early BSD releases included Pascal tools, and many modern Unix systems also include the GNU Compiler Collection as well as or instead of a proprietary compiler system. Other tools including an object-code archive manager (ar), symbol-table lister (nm), compiler-development tools (e.g. lex & yacc), and debugging tools. Commands Unix makes little distinction between commands (user-level programs) for system operation and maintenance (e.g. cron), commands of general utility (e.g. grep), and more general-purpose applications such as the text formatting and typesetting package. Nonetheless, some major categories are: sh the "shell" programmable command-line interpreter, the primary user interface on Unix before window systems appeared, and even afterward (within a "command window"). Utilities the core toolkit of the Unix command set, including cp, ls, grep, find and many others. Subcategories include: System utilities administrative tools such as mkfs, fsck, and many others. User utilities environment management tools such as passwd, kill, and others. Document formatting Unix systems were used from the outset for document preparation and typesetting systems, and included many related programs such as nroff, troff, tbl, eqn, refer, and pic. Some modern Unix systems also include packages such as TeX and Ghostscript. 
Graphics the plot subsystem provided facilities for producing simple vector plots in a device-independent format, with device-specific interpreters to display such files. Modern Unix systems also generally include X11 as a standard windowing system and GUI, and many support OpenGL. Communications early Unix systems contained no inter-system communication, but did include the inter-user communication programs mail and write. V7 introduced the early inter-system communication system UUCP, and systems beginning with BSD release 4.1c included TCP/IP utilities. Documentation Unix was one of the first operating systems to include all of its documentation online in machine-readable form. The documentation included: man manual pages for each command, library component, system call, header file, etc. doc longer documents detailing major subsystems, such as the C language and troff Impact The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language. Although this followed the lead of CTSS, Multics and Burroughs MCP, it was Unix that popularized the idea. Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple "stream of bytes" model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms. Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into OpenVMS directories, CP/M's volumes evolved into MS-DOS 2.0+ subdirectories, and HP's MPE group.account hierarchy and IBM's SSP and OS/400 library systems were folded into broader POSIX file systems. Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM's JCL). Since the shell and OS commands were "just another program", the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix's innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell. A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. 
There were no "binary" editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike "record-based" file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could easily be combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP. Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming. Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy. The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide, real-time connectivity and formed the basis for implementations on many other platforms. The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983. Free Unix and Unix-like variants In 1983, Richard Stallman announced the GNU (short for "GNU's Not Unix") project, an ambitious effort to create a free software Unix-like system—"free" in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project's own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the Linux kernel as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU Core Utilities – have gone on to play central roles in other free Unix systems as well. Linux distributions, consisting of the Linux kernel and large collections of compatible software have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian, Ubuntu, Linux Mint, Slackware Linux, Arch Linux and Gentoo. A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD. 
Because of the modular design of the Unix model, sharing components is relatively common: most or all Unix and Unix-like systems include at least some BSD code, while some include GNU utilities in their distributions. Linux and BSD Unix are increasingly filling market needs traditionally served by proprietary Unix operating systems, expanding into new markets such as the consumer desktop, mobile devices and embedded devices. In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD Unix operating systems are a continuation of the basis of the Unix design and are derivatives of Unix: In the same interview, he states that he views both Unix and Linux as "the continuation of ideas that were started by Ken and me and many others, many years ago". OpenSolaris was the free software counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. As of 2014, illumos remains the only active, open-source System V derivative. ARPANET In May 1975, RFC 681 described the development of Network Unix by the Center for Advanced Computation at the University of Illinois Urbana-Champaign. The Unix system was said to "present several interesting capabilities as an ARPANET mini-host". At the time, Unix required a license from Bell Telephone Laboratories that cost US$20,000 for non-university institutions, while universities could obtain a license for a nominal fee of $150. It was noted that Bell was "open to suggestions" for an ARPANET-wide license. The RFC specifically mentions that Unix "offers powerful local processing facilities in terms of user programs, several compilers, an editor based on QED, a versatile document preparation system, and an efficient file system featuring sophisticated access control, mountable and de-mountable volumes, and a unified treatment of peripherals as special files." The latter permitted the Network Control Program (NCP) to be integrated within the Unix file system, treating network connections as special files that could be accessed through standard Unix I/O calls, which included the added benefit of closing all connections on program exit, should the user neglect to do so. In order "to minimize the amount of code added to the basic Unix kernel", much of the NCP code ran in a swappable user process, running only when needed. Branding AT&T did not allow licensees to use the Unix name; thus Microsoft called its variant Xenix, for example. In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group), and in 1995 sold the related business operations to Santa Cruz Operation (SCO). Whether Novell also sold the copyrights to the actual software was the subject of a federal lawsuit in 2006, SCO v. Novell, which Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case. Unix vendor SCO Group Inc. accused Novell of slander of title. The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as "UNIX" (others are called "Unix-like"). 
By decree of The Open Group, the term "UNIX" refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group's Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system's vendor pays a substantial certification fee and annual trademark royalties to The Open Group. Systems that have been licensed to use the UNIX trademark include AIX, EulerOS, HP-UX, Inspur K-UX, IRIX, macOS, Solaris, Tru64 UNIX (formerly "Digital UNIX", or OSF/1), and z/OS. Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant. Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate all operating systems similar to Unix. This comes from the use of the asterisk (*) and the question mark characters as wildcard indicators in many utilities. This notation is also used to describe other Unix-like systems that have not met the requirements for UNIX branding from the Open Group. The Open Group requests that UNIX always be used as an adjective followed by a generic term such as system to help avoid the creation of a genericized trademark. Unix was the original formatting, but the usage of UNIX remains widespread because it was once typeset in small caps (Unix). According to Dennis Ritchie, when presenting the original Unix paper to the third Operating Systems Symposium of the American Association for Computing Machinery (ACM), "we had a new typesetter and troff had just been invented and we were intoxicated by being able to produce small caps". Many of the operating system's predecessors and contemporaries used all-uppercase lettering, so many people wrote the name in upper case due to force of habit. It is not an acronym. Trademark names can be registered by different entities in different countries and trademark laws in some countries allow the same trademark name to be controlled by two different entities if each entity uses the trademark in easily distinguishable categories. The result is that Unix has been used as a brand name for various products including bookshelves, ink pens, bottled glue, diapers, hair driers and food containers. Several plural forms of Unix are used casually to refer to multiple brands of Unix and Unix-like systems. Most common is the conventional Unixes, but Unices, treating Unix as a Latin noun of the third declension, is also popular. The pseudo-Anglo-Saxon plural form Unixen is not common, although occasionally seen. Sun Microsystems, developer of the Solaris variant, has asserted that the term Unix is itself plural, referencing its many implementations.
Technology
Operating systems
null
21347411
https://en.wikipedia.org/wiki/Chemical%20compound
Chemical compound
A chemical compound is a chemical substance composed of many identical molecules (or molecular entities) containing atoms from more than one chemical element held together by chemical bonds. A molecule consisting of atoms of only one element is therefore not a compound. A compound can be transformed into a different substance by a chemical reaction, which may involve interactions with other substances. In this process, bonds between atoms may be broken or new bonds formed or both. There are four major types of compounds, distinguished by how the constituent atoms are bonded together. Molecular compounds are held together by covalent bonds; ionic compounds are held together by ionic bonds; intermetallic compounds are held together by metallic bonds; coordination complexes are held together by coordinate covalent bonds. Non-stoichiometric compounds form a disputed marginal case. A chemical formula specifies the number of atoms of each element in a compound molecule, using the standard chemical symbols with numerical subscripts. Many chemical compounds have a unique CAS number identifier assigned by the Chemical Abstracts Service. Globally, more than 350,000 chemical compounds (including mixtures of chemicals) have been registered for production and use. History of the concept Robert Boyle The term "compound"—with a meaning similar to the modern—has been used at least since 1661 when Robert Boyle's The Sceptical Chymist was published. In this book, Boyle variously used the terms "compound", "compounded body", "perfectly mixt body", and "concrete". "Perfectly mixt bodies" included for example gold, lead, mercury, and wine. While the distinction between compound and mixture is not so clear, the distinction between element and compound is a central theme. Corpuscles of elements and compounds Boyle used the concept of "corpuscles"—or "atomes", as he also called them—to explain how a limited number of elements could combine into a vast number of compounds: Isaac Watts In his Logick, published in 1724, the English minister and logician Isaac Watts gave an early definition of chemical element, and contrasted element with chemical compound in clear, modern terms. Definitions Any substance consisting of two or more different types of atoms (chemical elements) in a fixed stoichiometric proportion can be termed a chemical compound; the concept is most readily understood when considering pure chemical substances. It follows from their being composed of fixed proportions of two or more types of atoms that chemical compounds can be converted, via chemical reaction, into compounds or substances each having fewer atoms. A chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using chemical symbols for the chemical elements, and subscripts to indicate the number of atoms involved. For example, water is composed of two hydrogen atoms bonded to one oxygen atom: the chemical formula is H2O. In the case of non-stoichiometric compounds, the proportions may be reproducible with regard to their preparation, and give fixed proportions of their component elements, but proportions that are not integral [e.g., for palladium hydride, PdHx (0.02 < x < 0.58)]. Chemical compounds have a unique and defined chemical structure held together in a defined spatial arrangement by chemical bonds. 
Chemical compounds can be molecular compounds held together by covalent bonds, salts held together by ionic bonds, intermetallic compounds held together by metallic bonds, or the subset of chemical complexes that are held together by coordinate covalent bonds. Pure chemical elements are generally not considered chemical compounds, failing the requirement of containing two or more different elements, though they often consist of molecules composed of multiple atoms (such as in the diatomic molecule H2, or the polyatomic molecule S8, etc.). Many chemical compounds have a unique numerical identifier assigned by the Chemical Abstracts Service (CAS): its CAS number. There is varying and sometimes inconsistent nomenclature differentiating substances, which include truly non-stoichiometric examples, from chemical compounds, which require the fixed ratios. Many solid chemical substances—for example many silicate minerals—are chemical substances, but do not have simple formulae reflecting chemical bonding of elements to one another in fixed ratios; even so, these crystalline substances are often called "non-stoichiometric compounds". It may be argued that they are related to, rather than being, chemical compounds, insofar as the variability in their compositions is often due to either the presence of foreign elements trapped within the crystal structure of an otherwise known true chemical compound, or due to perturbations in structure relative to the known compound that arise because of an excess or deficit of the constituent elements at places in its structure; such non-stoichiometric substances form most of the crust and mantle of the Earth. Other compounds regarded as chemically identical may have varying amounts of heavy or light isotopes of the constituent elements, which changes the ratio of elements by mass slightly. Types Molecules A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, as with two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, as with water (two hydrogen atoms and one oxygen atom; H2O). A molecule is the smallest unit of a substance that still carries all the physical and chemical properties of that substance. Ionic compounds An ionic compound is a chemical compound composed of ions held together by electrostatic forces termed ionic bonding. The compound is neutral overall, but consists of positively charged ions called cations and negatively charged ions called anions. These can be simple ions such as the sodium (Na+) and chloride (Cl−) in sodium chloride, or polyatomic species such as the ammonium (NH4+) and carbonate (CO32−) ions in ammonium carbonate. Individual ions within an ionic compound usually have multiple nearest neighbours, so are not considered to be part of molecules, but instead part of a continuous three-dimensional network, usually in a crystalline structure. Ionic compounds containing basic ions hydroxide (OH−) or oxide (O2−) are classified as bases. Ionic compounds without these ions are also known as salts and can be formed by acid–base reactions. Ionic compounds can also be produced from their constituent ions by evaporation of their solvent, precipitation, freezing, a solid-state reaction, or the electron transfer reaction of reactive metals with reactive non-metals, such as halogen gases. Ionic compounds typically have high melting and boiling points, and are hard and brittle. 
As solids they are almost always electrically insulating, but when melted or dissolved they become highly conductive, because the ions are mobilized. Intermetallic compounds An intermetallic compound is a type of metallic alloy that forms an ordered solid-state compound between two or more metallic elements. Intermetallics are generally hard and brittle, with good high-temperature mechanical properties. They can be classified as stoichiometric or nonstoichiometric intermetallic compounds. Complexes A coordination complex consists of a central atom or ion, which is usually metallic and is called the coordination centre, and a surrounding array of bound molecules or ions, that are in turn known as ligands or complexing agents. Many metal-containing compounds, especially those of transition metals, are coordination complexes. A coordination complex whose centre is a metal atom of the d block is called a metal complex. Bonding and forces Compounds are held together through a variety of different types of bonding and forces. The type of bonding in a compound depends on the types of elements present in it. London dispersion forces are the weakest force of all intermolecular forces. They are temporary attractive forces that form when the electrons in two adjacent atoms are positioned so that they create a temporary dipole. Additionally, London dispersion forces are responsible for condensing nonpolar substances into liquids, and for freezing them further into solids if the temperature of the environment is low enough. A covalent bond, also known as a molecular bond, involves the sharing of electrons between two atoms. Primarily, this type of bond occurs between elements that fall close to each other on the periodic table of elements, yet it is observed between some metals and nonmetals. This is due to the mechanism of this type of bond. Elements that fall close to each other on the periodic table tend to have similar electronegativities, which means they have a similar affinity for electrons. Since neither element has a markedly stronger tendency to donate or gain electrons, the elements share electrons so that both attain a more stable octet. Ionic bonding occurs when valence electrons are completely transferred between elements. Opposite to covalent bonding, this chemical bond creates two oppositely charged ions. The metals in ionic bonding usually lose their valence electrons, becoming a positively charged cation. The nonmetal will gain the electrons from the metal, making the nonmetal a negatively charged anion. As outlined, ionic bonds occur between an electron donor, usually a metal, and an electron acceptor, which tends to be a nonmetal. Hydrogen bonding occurs when a hydrogen atom bonded to an electronegative atom forms an electrostatic connection with another electronegative atom through interacting dipoles or charges. Reactions A compound can be converted to a different chemical composition by interaction with a second chemical compound via a chemical reaction. In this process, bonds between atoms are broken in both of the interacting compounds, and then bonds are reformed so that new associations are made between atoms. Schematically, this reaction could be described as AB + CD → AD + CB, where A, B, C, and D are each unique atoms; and AB, AD, CD, and CB are each unique compounds.
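To make the schematic concrete, one possible mapping (chosen here purely as an illustration, not taken from the text above) is the double-displacement reaction of hydrogen chloride with sodium fluoride, with A = H, B = Cl, C = Na, and D = F:

\[
\underset{AB}{\mathrm{HCl}} \;+\; \underset{CD}{\mathrm{NaF}} \;\longrightarrow\; \underset{AD}{\mathrm{HF}} \;+\; \underset{CB}{\mathrm{NaCl}}
\]

Here every reactant and product is a unique two-atom compound, and each of the four atoms ends up bonded to a new partner, which is exactly the pattern the AB + CD → AD + CB scheme expresses.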
Physical sciences
Chemistry
null
21347643
https://en.wikipedia.org/wiki/Mac%20operating%20systems
Mac operating systems
Mac operating systems were developed by Apple Inc. in a succession of two major series. In 1984, Apple debuted the operating system that is now known as the classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1997, was pre-installed on every Macintosh until 2002 and offered on Macintosh clones shortly in the 1990s. It was noted for its ease of use, and also criticized for its lack of modern technologies compared to its competitors. The current Mac operating system is macOS, originally named Mac OS X until 2012 and then OS X until 2016. It was developed between 1997 and 2001 after Apple's purchase of NeXT. It brought an entirely new architecture based on NeXTSTEP, a Unix system, that eliminated many of the technical challenges that the classic Mac OS faced, such as problems with memory management. The current macOS is pre-installed with every Mac and receives a major update annually. It is the basis of Apple's current system software for its other devices – iOS, iPadOS, watchOS, and tvOS. Prior to the introduction of Mac OS X, Apple experimented with several other concepts, releasing different products designed to bring the Macintosh interface or applications to Unix-like systems or vice versa, A/UX, MAE, and MkLinux. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code named Star Trek, Taligent, and Copland. Although the classic Mac OS and macOS (Mac OS X) have different architectures, they share a common set of GUI principles, including a menu bar across the top of the screen; the Finder shell, featuring a desktop metaphor that represents files and applications using icons and relates concepts like directories and file deletion to real-world objects like folders and a trash can; and overlapping windows for multitasking. Before the arrival of the Macintosh in 1984, Apple's history of operating systems began with its Apple II computers in 1977, which run Apple DOS, ProDOS, and GS/OS; the Apple III in 1980 runs Apple SOS; and the Lisa in 1983 which runs Lisa OS and later MacWorks XL, a Macintosh emulator. Apple developed the Newton OS for its Newton personal digital assistant from 1993 to 1997. Apple launched several new operating systems based on the core of macOS, including iOS in 2007 for its iPhone, iPad, and iPod Touch mobile devices and in 2017 for its HomePod smart speakers; watchOS in 2015 for the Apple Watch; and tvOS in 2015 for the Apple TV set-top box. Classic Mac OS The classic Mac OS is the original Macintosh operating system introduced in 1984 alongside the first Macintosh and remained in primary use on Macs until Mac OS X in 2001. Apple released the original Macintosh on January 24, 1984; its early system software is partially based on Lisa OS, and inspired by the Alto computer, which former Apple CEO Steve Jobs previewed at Xerox PARC. It was originally named "System Software", or simply "System"; Apple rebranded it as "Mac OS" in 1996 due in part to its Macintosh clone program that ended one year later. Classic Mac OS is characterized by its monolithic design. Initial versions of the System Software run one application at a time. System 5 introduced cooperative multitasking. System 7 supports 32-bit memory addressing and virtual memory, allowing larger programs. Later updates to the System 7 enable the transition to the PowerPC architecture. 
The system was considered user-friendly, but its architectural limitations were critiqued, such as limited memory management, lack of protected memory and access controls, and susceptibility to conflicts among extensions. Releases Nine major versions of the classic Mac OS were released. The name "Classic" that now signifies the system as a whole is a reference to a compatibility layer that helped ease the transition to Mac OS X. Macintosh System Software – "System 1", released in 1984 System Software 2, 3, and 4 – released between 1985 and 1987 System Software 5 – released in 1987 System Software 6 – released in 1988 System 7 / Mac OS 7.6 – released in 1991 Mac OS 8 – released in 1997 Mac OS 9 – final major version, released in 1999 Mac OS X, OS X, and macOS The system was launched as Mac OS X, was renamed OS X from 2012 to 2016, and has been named macOS since then; it is the current Mac operating system and officially succeeded the classic Mac OS in 2001. The system was originally marketed as simply "version 10" of Mac OS, but it has a history that is largely independent of the classic Mac OS. It is a Unix-based operating system built on NeXTSTEP and other NeXT technology from the late 1980s until early 1997, when Apple purchased the company and its CEO Steve Jobs returned to Apple. Precursors to Mac OS X include OPENSTEP, Apple's Rhapsody project, and the Mac OS X Public Beta. macOS is based on Apple's open source Darwin operating system, which is based on the XNU kernel and BSD. macOS is the basis for some of Apple's other operating systems, including iPhone OS/iOS, iPadOS, watchOS, tvOS, and visionOS. Releases Desktop The first version of the system was released on March 24, 2001, supporting the Aqua user interface. Since then, several more versions adding newer features and technologies have been released. Since 2011, new releases have been offered annually. Mac OS X 10.0 – codenamed "Cheetah", released Saturday, March 24, 2001 Mac OS X 10.1 – codenamed "Puma", released Tuesday, September 25, 2001 Mac OS X Jaguar – version 10.2, released Friday, August 23, 2002 Mac OS X Panther – version 10.3, released Friday, October 24, 2003 Mac OS X Tiger – version 10.4, released Friday, April 29, 2005 Mac OS X Leopard – version 10.5, released Friday, October 26, 2007 Mac OS X Snow Leopard – version 10.6, publicly unveiled on Monday, June 8, 2009 Mac OS X Lion – version 10.7, released Wednesday, July 20, 2011 OS X Mountain Lion – version 10.8, released Wednesday, July 25, 2012 OS X Mavericks – version 10.9, released Tuesday, October 22, 2013 OS X Yosemite – version 10.10, released Thursday, October 16, 2014 OS X El Capitan – version 10.11, released Wednesday, September 30, 2015 macOS Sierra – version 10.12, released Tuesday, September 20, 2016 macOS High Sierra – version 10.13, released Monday, September 25, 2017 macOS Mojave – version 10.14, released Monday, September 24, 2018 macOS Catalina – version 10.15, released Monday, October 7, 2019 macOS Big Sur – version 11, released Thursday, November 12, 2020 macOS Monterey – version 12, released Monday, October 25, 2021 macOS Ventura – version 13, released Monday, October 24, 2022 macOS Sonoma – version 14, released Tuesday, September 26, 2023 macOS Sequoia – version 15, released Monday, September 16, 2024 macOS 10.16's version number was updated to 11.0 in the third beta, so the third beta of macOS Big Sur is 11.0 Beta 3 rather than 10.16 Beta 3. Server An early server computing version of the system was released in 1999 as a technology preview. 
It was followed by several more official server-based releases. Server functionality has instead been offered as an add-on for the desktop system since 2011. Mac OS X Server 1.0 – code named "Hera", released in 1999 Mac OS X Server – later called "OS X Server" and "macOS Server", released between 2001 and 2022. Other projects Shipped A/ROSE The Apple Real-time Operating System Environment (A/ROSE) is a small embedded operating system which runs on the Macintosh Coprocessor Platform, an expansion card for the Macintosh. It is a single "overdesigned" hardware platform on which third-party vendors could build practically any product, reducing the otherwise heavy workload of developing a NuBus-based expansion card. The first version of the system was ready for use in February 1988. A/UX In 1988, Apple released its first UNIX-based OS, A/UX, which is a UNIX operating system with the Mac OS look and feel. It was not very competitive for its time, due in part to the crowded UNIX market and Macintosh hardware lacking high-end design features present on workstation-class computers. Most of its sales were to the U.S. government, which required POSIX compliance that the classic Mac OS lacked. MAE The Macintosh Application Environment (MAE) is a software package introduced by Apple in 1994 that allows certain Unix-based computer workstations to run Macintosh applications. MAE uses the X Window System to emulate a Macintosh Finder-style graphical user interface. The last version, MAE 3.0, is compatible with System 7.5.3. MAE was published for Sun Microsystems SPARCstation and Hewlett-Packard systems. It was discontinued on May 14, 1998. MkLinux Announced at the 1996 Worldwide Developers Conference (WWDC), MkLinux is an open source operating system that was started by the OSF Research Institute and Apple in February 1996 to port Linux to the PowerPC platform, and thus Macintosh computers. In mid 1998, the community-led MkLinux Developers Association took over development of the operating system. MkLinux is short for "Microkernel Linux", which refers to its adaptation of the monolithic Linux kernel to run as a server hosted atop the Mach microkernel version 3.0. Cancelled projects Star Trek The Star Trek project (as in "to boldly go where no Mac has gone before") was a secret prototype effort, begun in 1992, to port the classic Mac OS to Intel-compatible x86 personal computers. In partnership with Apple and with support from Intel, the project was instigated by Novell, which was looking to integrate its DR-DOS with the Mac OS GUI as a mutual response to the monopoly of Microsoft's Windows 3.0 and MS-DOS. A team consisting of four people from Apple and four from Novell got the Macintosh Finder and some basic applications, such as QuickTime, running smoothly. The project was canceled one year later in early 1993, but was partially reused when porting the Mac OS to PowerPC. Taligent Taligent (a portmanteau of "talent" and "intelligent") is an object-oriented operating system and the company producing it. Started as the Pink project within Apple to provide a replacement for the classic Mac OS, it was later spun off into a joint venture with IBM as part of the AIM alliance, with the purpose of building a competing platform to Microsoft Cairo and NeXTSTEP. The development process never worked well, and the project has been cited as an example of a death march. Apple pulled out of the project in 1995 before the code had been delivered. Copland Copland was a project at Apple to create an updated version of the classic Mac OS. 
It was to have introduced protected memory, preemptive multitasking, and new underlying operating system features, yet still be compatible with existing Mac software. They originally planned the follow-up release Gershwin to add multithreading and other advanced features. New features were added more rapidly than they could be completed, and the completion date slipped into the future with no sign of a release. In 1996, Apple canceled the project outright and sought a suitable third-party replacement. Copland development ended in August 1996, and in December 1996, Apple announced that it was buying NeXT for its NeXTSTEP operating system. Timeline
Technology
Operating Systems
null
21347678
https://en.wikipedia.org/wiki/Unit%20of%20measurement
Unit of measurement
A unit of measurement, or unit of measure, is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity. Any other quantity of that kind can be expressed as a multiple of the unit of measurement. For example, a length is a physical quantity. The metre (symbol m) is a unit of length that represents a definite predetermined length. For instance, when referencing "10 metres" (or 10 m), what is actually meant is 10 times the definite predetermined length called "metre". The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to the present. A multitude of systems of units used to be very common. Now there is a global standard, the International System of Units (SI), the modern form of the metric system. In trade, weights and measures are often a subject of governmental regulation, to ensure fairness and transparency. The International Bureau of Weights and Measures (BIPM) is tasked with ensuring worldwide uniformity of measurements and their traceability to the International System of Units (SI). Metrology is the science of developing nationally and internationally accepted units of measurement. In physics and metrology, units are standards for measurement of physical quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method. A standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures historically developed for commercial purposes. Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life. The judicious selection of the units of measurement can aid researchers in problem solving (see, for example, dimensional analysis). History A unit of measurement is a standardized quantity of a physical property, used as a factor to express occurring quantities of that property. Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. The earliest known uniform systems of measurement seem to have all been created sometime in the 4th and 3rd millennia BC among the ancient peoples of Mesopotamia, Egypt and the Indus Valley, and perhaps also Elam in Persia as well. Weights and measures are mentioned in the Bible (Leviticus 19:35–36). It is a commandment to be honest and have fair measures. In the Magna Carta of 1215 (The Great Charter) with the seal of King John, put before him by the Barons of England, King John agreed in Clause 35 "There shall be one measure of wine throughout our whole realm, and one measure of ale and one measure of corn—namely, the London quart;—and one width of dyed and russet and hauberk cloths—namely, two ells below the selvage..." As of the 21st century, the International System is predominantly used in the world. There exist other unit systems which are used in many places such as the United States Customary System and the Imperial System. The United States is the only industrialized country that has not yet at least mostly converted to the metric system. 
The systematic effort to develop a universally acceptable system of units dates back to 1790 when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. This system was the precursor to the metric system, which was quickly developed in France but did not gain universal acceptance until 1875, when the Metric Convention Treaty was signed by 17 nations. After this treaty was signed, a General Conference of Weights and Measures (CGPM) was established. The CGPM produced the current SI, which was adopted in 1954 at the 10th Conference of Weights and Measures. Currently, the United States is a dual-system society which uses both the SI and the US Customary system. Systems of units The use of a single unit of measurement for some quantity has obvious drawbacks. For example, it is impractical to use the same unit for the distance between two cities and the length of a needle. Thus, historically, such units developed independently. One way to make large numbers or small fractions easier to read is to use unit prefixes. At some point in time though, the need to relate the two units might arise, and consequently the need to choose one unit as defining the other or vice versa. For example, an inch could be defined in terms of a barleycorn. A system of measurement is a collection of units of measurement and rules relating them to each other. As science progressed, a need arose to relate the measurement systems of different quantities, such as length, weight, and volume. The effort to relate different traditional systems to each other exposed many inconsistencies, and brought about the development of new units and systems. Systems of units vary from country to country. Some of the different systems include the centimetre–gram–second, foot–pound–second, metre–kilogram–second systems, and the International System of Units, SI. Among the different systems of units used in the world, the most widely used and internationally accepted one is SI. The base SI units are the second, metre, kilogram, ampere, kelvin, mole and candela; all other SI units are derived from these base units. Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. Traditional systems Historically many of the systems of measurement which had been in use were to some extent based on the dimensions of the human body. Such units, which may be called anthropic units, include the cubit, based on the length of the forearm; the pace, based on the length of a stride; and the foot and hand. As a result, units of measure could vary not only from location to location but from person to person. Units not based on the human body could be based on agriculture, as is the case with the furlong and the acre, both based on the amount of land able to be worked by a team of oxen. Metric systems Metric systems of units have evolved since the adoption of the original metric system in France in 1791. The current international standard metric system is the International System of Units (abbreviated to SI). An important feature of modern systems is standardization. Each unit has a universally recognized size. Both the imperial units and US customary units derive from earlier English units. Imperial units were mostly used in the British Commonwealth and the former British Empire. 
US customary units are still the main system of measurement used in the United States outside of science, medicine, many sectors of industry, and some of government and military use, despite Congress having legally authorised metric measure on 28 July 1866. Some steps towards US metrication have been made, particularly the redefinition of basic US and imperial units to derive exactly from SI units. Since the international yard and pound agreement of 1959 the US and imperial inch is now defined as exactly 25.4 mm, and the US and imperial avoirdupois pound is now defined as exactly 453.59237 g. Natural systems While the above systems of units are based on arbitrary unit values, formalised as standards, natural units in physics are based on physical principle or are selected to make physical equations easier to work with. For example, atomic units (au) were designed to simplify the wave equation in atomic physics. Some unusual and non-standard units may be encountered in sciences. These may include the solar mass (M☉), the megaton (the energy released by detonating one million tons of trinitrotoluene, TNT) and the electronvolt. Legal control of weights and measures To reduce the incidence of retail fraud, many national statutes have standard definitions of weights and measures that may be used (hence "statute measure"), and these are verified by legal officers. Informal comparison to familiar concepts In informal settings, a quantity may be described as multiples of that of a familiar entity, which can be easier to contextualize than a value in a formal unit system. For instance, a publication may describe an area in a foreign country as a number of multiples of the area of a region local to the readership. The propensity for certain concepts to be used frequently can give rise to loosely defined "systems" of units. Base and derived units For most quantities a unit is necessary to communicate values of that physical quantity. For example, conveying to someone a particular length without using some sort of unit is impossible, because a length cannot be described without a reference used to make sense of the value given. But not all quantities require a unit of their own. Using physical laws, units of quantities can be expressed as combinations of units of other quantities. Thus only a small set of units is required. These units are taken as the base units and the other units are derived units. Thus base units are the units of the quantities which are independent of other quantities and they are the units of length, mass, time, electric current, temperature, luminous intensity and the amount of substance. Derived units are the units of the quantities which are derived from the base quantities and some of the derived units are the units of speed, work, acceleration, energy, pressure etc. Different systems of units are based on different choices of a set of related units including fundamental and derived units. Physical quantity components Dimensional homogeneity Units can only be added or subtracted if they are the same type; however units can always be multiplied or divided, as George Gamow used to explain. Let x be "2 metres" and y be "3 seconds"; then x · y = 6 metre-seconds and x / y = 2/3 metre per second. There are certain rules that apply to units: Only like terms may be added. When a unit is divided by itself, the division yields a unitless one. When two different units are multiplied or divided, the result is a new unit, referred to by the combination of the units. For instance, in SI, the unit of speed is metre per second (m/s). See dimensional analysis. 
A unit can be multiplied by itself, creating a unit with an exponent (e.g. m2/s2). Put simply, units obey the laws of indices. (See Exponentiation.) Some units have special names; however, these should be treated like their equivalents. For example, one newton (N) is equivalent to 1 kg⋅m/s2. Thus a quantity may have several unit designations, for example: the unit for surface tension can be referred to as either N/m (newton per metre) or kg/s2 (kilogram per second squared). Converting units of measurement Real-world implications One example of the importance of agreed units is the failure of the NASA Mars Climate Orbiter, which was accidentally destroyed on a mission to Mars in September 1999 (instead of entering orbit) due to miscommunications about the value of forces: different computer programs used different units of measurement (newton versus pound force). Considerable amounts of effort, time, and money were wasted. On 15 April 1999, Korean Air cargo flight 6316 from Shanghai to Seoul was lost due to the crew confusing tower instructions (in metres) and altimeter readings (in feet). Three crew and five people on the ground were killed. Thirty-seven were injured. In 1983, a Boeing 767 (which thanks to its pilot's gliding skills landed safely and became known as the Gimli Glider) ran out of fuel in mid-flight because of two mistakes in figuring the fuel supply of Air Canada's first aircraft to use metric measurements. This accident was the result of both confusion due to the simultaneous use of metric and Imperial measures and confusion of mass and volume measures. When planning his journey across the Atlantic Ocean in the 1480s, Columbus mistakenly assumed that the mile referred to in the Arabic estimate of the size of a degree was the same as the actually much shorter Italian mile of 1,480 metres. His estimate for the size of the degree and for the circumference of the Earth was therefore about 25% too small.
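The bookkeeping behind these rules is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration, not a real units library, and all names in it are invented for the example: quantities carry exponents of (metre, kilogram, second), addition demands identical exponents, and multiplication or division adds or subtracts them — the kind of check that would have flagged the newton-versus-pound-force mix-up on the Mars Climate Orbiter.

from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents of (metre, kilogram, second); (1, 0, -1) means m/s

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError("only like terms may be added")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

length = Quantity(2.0, (1, 0, 0))    # 2 metres
duration = Quantity(3.0, (0, 0, 1))  # 3 seconds
speed = length / duration            # dims (1, 0, -1), i.e. metres per second
area = length * length               # dims (2, 0, 0), i.e. square metres
# length + duration                  # would raise ValueError: unlike terms cannot be added

# 1 pound-force is defined as exactly 4.4482216152605 newtons; applying (or
# forgetting) this factor is the conversion at issue in the Mars Climate Orbiter loss.
LBF_TO_N = 4.4482216152605
thrust = Quantity(10.0 * LBF_TO_N, (1, 1, -2))  # 10 lbf expressed as about 44.5 N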
Physical sciences
Measurement: General
null
21347693
https://en.wikipedia.org/wiki/Watt
Watt
The watt (symbol: W) is the unit of power or radiant flux in the International System of Units (SI), equal to 1 joule per second or 1 kg⋅m2⋅s−3. It is used to quantify the rate of energy transfer. The watt is named in honor of James Watt (1736–1819), an 18th-century Scottish inventor, mechanical engineer, and chemist who improved the Newcomen engine with his own steam engine in 1776. Watt's invention was fundamental for the Industrial Revolution. Overview When an object's velocity is held constant at one meter per second against a constant opposing force of one newton, the rate at which work is done is one watt. In terms of electromagnetism, one watt is the rate at which electrical work is performed when a current of one ampere (A) flows across an electrical potential difference of one volt (V), meaning the watt is equivalent to the volt-ampere (the latter unit, however, is used for a different quantity from the real power of an electrical circuit). Two additional unit conversions for the watt can be found by combining this relationship with Ohm's law: 1 W = 1 V2/Ω = 1 A2⋅Ω, where the ohm (Ω) is the SI derived unit of electrical resistance. Examples A person having a mass of 100 kg who climbs a 3-meter-high ladder in 5 seconds is doing work at a rate of about 600 watts. Mass times acceleration due to gravity times height divided by the time it takes to lift the object to the given height gives the rate of doing work or power. A laborer over the course of an eight-hour day can sustain an average output of about 75 watts; higher power levels can be achieved for short intervals and by athletes. Origin and adoption as an SI unit The watt is named after the Scottish inventor James Watt. The unit name was proposed by C. William Siemens in August 1882 in his President's Address to the Fifty-Second Congress of the British Association for the Advancement of Science. Noting that units in the practical system of units were named after leading physicists, Siemens proposed that watt might be an appropriate name for a unit of power. Siemens defined the unit within the existing system of practical units as "the power conveyed by a current of an Ampère through the difference of potential of a Volt". In October 1908, at the International Conference on Electric Units and Standards in London, so-called international definitions were established for practical electrical units. Siemens' definition was adopted as the international watt. (Also used: 1 A2⋅Ω.) The watt was defined as equal to 10⁷ units of power in the practical system of units. The "international units" were dominant from 1909 until 1948. After the 9th General Conference on Weights and Measures in 1948, the international watt was redefined from practical units to absolute units (i.e., using only length, mass, and time). Concretely, this meant that 1 watt was defined as the quantity of energy transferred in a unit of time, namely 1 J/s. In this new definition, 1 absolute watt = 1.00019 international watts. Texts written before 1948 are likely to be using the international watt, which implies caution when comparing numerical values from this period with the post-1948 watt. In 1960, the 11th General Conference on Weights and Measures adopted the absolute watt into the International System of Units (SI) as the unit of power. Multiples Attowatt The sound intensity in water corresponding to the international standard reference sound pressure of 1 μPa is approximately 0.65 aW/m2. Femtowatt Powers measured in femtowatts are typically found in references to radio and radar receivers. 
For example, meaningful FM tuner performance figures for sensitivity, quieting and signal-to-noise require that the RF energy applied to the antenna input be specified. These input levels are often stated in dBf (decibels referenced to 1 femtowatt). This is 0.2739 microvolts across a 75-ohm load or 0.5477 microvolt across a 300-ohm load; the specification takes into account the RF input impedance of the tuner. Picowatt Powers measured in picowatts are typically used in reference to radio and radar receivers, acoustics and in the science of radio astronomy. One picowatt is the international standard reference value of sound power when this quantity is expressed in decibels. Nanowatt Powers measured in nanowatts are also typically used in reference to radio and radar receivers. Microwatt Powers measured in microwatts are typically stated in medical instrumentation systems such as the electroencephalograph (EEG) and the electrocardiograph (ECG), in a wide variety of scientific and engineering instruments, and in reference to radio and radar receivers. Compact solar cells for devices such as calculators and watches are typically measured in microwatts. Milliwatt A typical laser pointer outputs about five milliwatts of light power, whereas a typical hearing aid uses less than one milliwatt. Audio signals and other electronic signal levels are often measured in dBm, referenced to one milliwatt. Kilowatt The kilowatt is typically used to express the output power of engines and the power of electric motors, tools, machines, and heaters. It is also a common unit used to express the electromagnetic power output of broadcast radio and television transmitters. One kilowatt is approximately equal to 1.34 horsepower. A small electric heater with one heating element can use 1 kilowatt. The average electric power consumption of a household in the United States is about 1 kilowatt. A surface area of 1 square meter on Earth receives typically about one kilowatt of sunlight from the Sun (the solar irradiance) (on a clear day at midday, close to the equator). Megawatt Many events or machines produce or sustain the conversion of energy on this scale, including large electric motors; large warships such as aircraft carriers, cruisers, and submarines; large server farms or data centers; and some scientific research equipment, such as supercolliders, and the output pulses of very large lasers. A large residential or commercial building may use several megawatts in electric power and heat. On railways, modern high-powered electric locomotives typically have a peak power output of , while some produce much more. The Eurostar e300, for example, uses more than , while heavy diesel-electric locomotives typically produce and use . U.S. nuclear power plants have net summer capacities between about . The earliest citation of the megawatt in the Oxford English Dictionary (OED) is a reference in the 1900 Webster's International Dictionary of the English Language. The OED also states that megawatt appeared in a November 28, 1947, article in the journal Science (506:2). Gigawatt A gigawatt is the typical average power for an industrial city of one million inhabitants, and is the output of a large power station. The GW unit is thus used for large power plants and power grids. For example, by the end of 2010, power shortages in China's Shanxi province were expected to increase to 5–6 GW and the installation capacity of wind power in Germany was 25.8 GW. 
The largest unit (out of four) of the Belgian Doel Nuclear Power Station has a peak output of 1.04 GW. HVDC converters have been built with power ratings of up to 2 GW. Terawatt The primary energy used by humans worldwide was about 160,000 terawatt-hours in 2019, corresponding to an average continuous power consumption of 18 TW that year. Earth itself emits 47±2 TW, far less than the energy received from solar radiation. The most powerful lasers from the mid-1960s to the mid-1990s produced power in terawatts, but only for nanosecond intervals. The average lightning strike peaks at 1 TW, but these strikes only last for 30 microseconds. Petawatt A petawatt can be produced by the current generation of lasers for time scales on the order of picoseconds. One such laser is Lawrence Livermore's Nova laser, which achieved a power output of 1.25 PW by a process called chirped pulse amplification. The duration of the pulse was roughly 0.5 ps, giving a total energy of 600 J. Another example is the Laser for Fast Ignition Experiments (LFEX) at the Institute of Laser Engineering (ILE), Osaka University, which achieved a power output of 2 PW for a duration of approximately 1 ps. Based on the average total solar irradiance of 1.361 kW/m2, the total power of sunlight striking Earth's atmosphere is estimated at 174 PW. The planet's average rate of global warming, measured as Earth's energy imbalance, reached about 0.5 PW (0.3% of incident solar power) by 2019. Yottawatt The power output of the Sun is 382.8 YW, about 2 billion times the power estimated to reach Earth's atmosphere. Conventions in the electric power industry In the electric power industry, megawatt electrical (MWe) refers by convention to the electric power produced by a generator, while megawatt thermal or thermal megawatt (MWt or MWth) refers to thermal power produced by the plant. For example, the Embalse nuclear power plant in Argentina uses a fission reactor to generate 2,109 MWt (i.e. heat), which creates steam to drive a turbine, which generates 648 MWe (i.e. electricity). Other SI prefixes are sometimes used, for example gigawatt electrical (GWe). The International Bureau of Weights and Measures, which maintains the SI standard, states that further information about a quantity should not be attached to the unit symbol but instead to the quantity symbol, and so these unit symbols are non-SI. In compliance with SI, the energy company Ørsted A/S uses the unit megawatt for produced electrical power and the equivalent unit megajoule per second for delivered heating power in a combined heat and power station such as Avedøre Power Station. When describing alternating current (AC) electricity, another distinction is made between the watt and the volt-ampere. While these units are equivalent for simple resistive circuits, they differ when loads exhibit electrical reactance. Radio transmission Radio stations usually report the power of their transmitters in units of watts, referring to the effective radiated power. This refers to the power that a half-wave dipole antenna would need to radiate to match the intensity of the transmitter's main lobe. Distinction between watts and watt-hours The terms power and energy are closely related but distinct physical quantities. Power is the rate at which energy is generated or consumed and hence is measured in units (e.g. watts) that represent energy per unit time. 
For example, when a light bulb with a power rating of 100 watts is turned on for one hour, the energy used is 100 watt hours (W·h), 0.1 kilowatt hour, or 360 kJ. This same amount of energy would light a 40-watt bulb for 2.5 hours, or a 50-watt bulb for 2 hours. Power stations are rated using units of power, typically megawatts or gigawatts (for example, the Three Gorges Dam in China is rated at approximately 22 gigawatts). This reflects the maximum power output it can achieve at any point in time. A power station's annual energy output, however, would be recorded using units of energy (not power), typically gigawatt hours. Major energy production or consumption is often expressed as terawatt hours for a given period, often a calendar year or financial year. One terawatt hour of energy is equal to a sustained power delivery of one terawatt for one hour, or approximately 114 megawatts for a period of one year: Power output = energy / time, so 1 terawatt hour per year = 10¹² W·h / (365 days × 24 hours per day) = 10¹² W·h / 8,760 h ≈ 114 million watts, equivalent to approximately 114 megawatts of constant power output. The watt-second is a unit of energy, equal to the joule. One kilowatt hour is 3,600,000 watt seconds. While a watt per hour is a unit of rate of change of power with time, it is not correct to refer to a watt (or watt-hour) as a watt per hour.
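As a quick check of the light-bulb figures above (a worked example added here for clarity, using only exact unit definitions):

\[
100\ \mathrm{W} \times 1\ \mathrm{h} = 100\ \mathrm{W{\cdot}h} = 0.1\ \mathrm{kW{\cdot}h} = 100\ \mathrm{W} \times 3600\ \mathrm{s} = 360{,}000\ \mathrm{J} = 360\ \mathrm{kJ},
\]
\[
\frac{100\ \mathrm{W{\cdot}h}}{40\ \mathrm{W}} = 2.5\ \mathrm{h}, \qquad \frac{100\ \mathrm{W{\cdot}h}}{50\ \mathrm{W}} = 2\ \mathrm{h}.
\]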
Physical sciences
Energy, power, force and pressure
null
21349167
https://en.wikipedia.org/wiki/Palaemon%20serratus
Palaemon serratus
Palaemon serratus, also called the common prawn, is a species of shrimp found in the Atlantic Ocean from Denmark to Mauritania, and in the Mediterranean Sea and Black Sea. Ecology Individuals live for 3–5 years in groups in rocky crevices at depths of up to . Females grow faster than males, and the population is highly seasonal, with a pronounced peak in the autumn. They are preyed upon by a variety of fish, including species of Mullidae, Moronidae, Sparidae and Batrachoididae. P. serratus can sometimes be found with a prominent bulge in its carapace over its gills. This is caused by the presence of an isopod parasite, such as Bopyrus squillarum. Description Palaemon serratus may be distinguished from other species of shrimp by the rostrum, which curves upwards, is bifurcated at the tip and has 6–7 teeth along its upper edge, and 4–5 teeth on the lower edge. Other species may have a slightly curved rostrum, but then the teeth on its dorsal surface continue into the distal third, which is untoothed in P. serratus. P. serratus is pinkish brown, with reddish patterns, and is typically long, making it the largest of the native shrimp and prawns around the British Isles. Palaemon serratus is one of the few invertebrates to have its hearing studied in detail; it is sensitive to frequencies between 100 Hz and 3 kHz, with an acuity similar to that of generalist fish. While the hearing range of a P. serratus individual changes as it grows, all are capable of hearing tones at 500 Hz. Fisheries A small commercial fishery exists for P. serratus on the west coast of Great Britain, chiefly in West Wales (Cemaes Head to the Llŷn Peninsula), but extending increasingly far north to include parts of Scotland. In Ireland, fishing for P. serratus began at Baltimore, County Cork in the 1970s and subsequently expanded. A peak landing of 548 t was recorded in 1999, and four counties account for over 90% of the catch — Galway, Kerry, Cork and Waterford. There is now concern that the current levels of exploitation may represent overfishing, and measures are being considered to limit the catch, such as a minimum landing size.
Biology and health sciences
Shrimps and prawns
Animals
21350772
https://en.wikipedia.org/wiki/Greenhouse%20gas
Greenhouse gas
Greenhouse gases (GHGs) are the gases in the atmosphere that raise the surface temperature of planets such as the Earth. What distinguishes them from other gases is that they absorb the wavelengths of radiation that a planet emits, resulting in the greenhouse effect. The Earth is warmed by sunlight, causing its surface to radiate heat, which is then mostly absorbed by greenhouse gases. Without greenhouse gases in the atmosphere, the average temperature of Earth's surface would be about −18 °C, rather than the present average of 15 °C. The five most abundant greenhouse gases in Earth's atmosphere, listed in decreasing order of average global mole fraction, are: water vapor, carbon dioxide, methane, nitrous oxide, ozone. Other greenhouse gases of concern include chlorofluorocarbons (CFCs and HCFCs), hydrofluorocarbons (HFCs), perfluorocarbons, sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3). Water vapor causes about half of the greenhouse effect, acting in response to other gases as a climate change feedback. Human activities since the beginning of the Industrial Revolution (around 1750) have increased atmospheric carbon dioxide levels by over 50%, and methane levels by 150%. Carbon dioxide emissions are causing about three-quarters of global warming, while methane emissions cause most of the rest. The vast majority of carbon dioxide emissions by humans come from the burning of fossil fuels, with remaining contributions from agriculture and industry. Methane emissions originate from agriculture, fossil fuel production, waste, and other sources. The carbon cycle takes thousands of years to fully absorb carbon dioxide from the atmosphere, while methane lasts in the atmosphere for an average of only 12 years. Natural flows of carbon happen between the atmosphere, terrestrial ecosystems, the ocean, and sediments. These flows have been fairly balanced over the past one million years, although greenhouse gas levels have varied widely in the more distant past. Carbon dioxide levels are now higher than they have been for three million years. If current emission rates continue then global warming will reach a level which the Intergovernmental Panel on Climate Change (IPCC) says is "dangerous" sometime between 2040 and 2070. Properties and mechanisms Greenhouse gases are infrared active, meaning that they absorb and emit infrared radiation in the same long wavelength range as what is emitted by the Earth's surface, clouds and atmosphere. 99% of the Earth's dry atmosphere (excluding water vapor) is made up of nitrogen (N2) (78%) and oxygen (O2) (21%). Because their molecules contain two atoms of the same element, they have no asymmetry in the distribution of their electrical charges, and so are almost totally unaffected by infrared thermal radiation, with only an extremely minor effect from collision-induced absorption. A further 0.9% of the atmosphere is made up by argon (Ar), which is monatomic, and so completely transparent to thermal radiation. On the other hand, carbon dioxide (0.04%), methane, nitrous oxide and even less abundant trace gases account for less than 0.1% of Earth's atmosphere, but because their molecules contain atoms of different elements, there is an asymmetry in electric charge distribution which allows molecular vibrations to interact with electromagnetic radiation. This makes them infrared active, and so their presence causes the greenhouse effect. Radiative forcing Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat. 
A planet's surface temperature depends on this balance between incoming and outgoing energy. When Earth's energy balance is shifted, its surface becomes warmer or cooler, leading to a variety of changes in global climate. Radiative forcing is a metric calculated in watts per square meter, which characterizes the impact of an external change in a factor that influences climate. It is calculated as the difference in top-of-atmosphere (TOA) energy balance immediately caused by such an external change. A positive forcing, such as from increased concentrations of greenhouse gases, means more energy arriving than leaving at the top-of-atmosphere, which causes additional warming, while negative forcing, like from sulfates forming in the atmosphere from sulfur dioxide, leads to cooling. Within the lower atmosphere, greenhouse gases exchange thermal radiation with the surface and limit radiative heat flow away from it, which reduces the overall rate of upward radiative heat transfer. The increased concentration of greenhouse gases is also cooling the upper atmosphere, as it is much thinner than the lower layers, and any heat re-emitted from greenhouse gases is more likely to travel further to space than to interact with the fewer gas molecules in the upper layers. The upper atmosphere is also shrinking as a result. Contributions of specific gases to the greenhouse effect Anthropogenic changes to the natural greenhouse effect are sometimes referred to as the enhanced greenhouse effect. This table shows the most important contributions to the overall greenhouse effect, without which the average temperature of Earth's surface would be about −18 °C, instead of around 15 °C. This table also specifies tropospheric ozone, because this gas has a cooling effect in the stratosphere, but a warming influence comparable to nitrous oxide and CFCs in the troposphere. Special role of water vapor Water vapor is the most important greenhouse gas overall, being responsible for 41–67% of the greenhouse effect, but its global concentrations are not directly affected by human activity. While local water vapor concentrations can be affected by developments such as irrigation, these have little impact on the global scale due to water vapor's short residence time of about nine days. Indirectly, an increase in global temperatures will also increase water vapor concentrations and thus their warming effect, in a process known as water vapor feedback. It occurs because the Clausius–Clapeyron relation holds that more water vapor will be present per unit volume at elevated temperatures. Thus, local atmospheric concentration of water vapor varies from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C. Global warming potential (GWP) and equivalents List of all greenhouse gases The contribution of each gas to the enhanced greenhouse effect is determined by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 84 times stronger than the same mass of carbon dioxide over a 20-year time frame. Since the 1980s, greenhouse gas forcing contributions (relative to year 1750) have also been estimated with high accuracy using IPCC-recommended expressions derived from radiative transfer models. The concentration of a greenhouse gas is typically measured in parts per million (ppm) or parts per billion (ppb) by volume. A concentration of 420 ppm means that 420 out of every million air molecules is a CO2 molecule. 
The first 30 ppm increase in CO2 concentrations took place in about 200 years, from the start of the Industrial Revolution to 1958; however the next 90 ppm increase took place within 56 years, from 1958 to 2014. Similarly, the average annual increase in the 1960s was only 37% of what it was in 2000 through 2007. Many observations are available online in a variety of Atmospheric Chemistry Observational Databases. The table below shows the most influential long-lived, well-mixed greenhouse gases, along with their tropospheric concentrations and direct radiative forcings, as identified by the Intergovernmental Panel on Climate Change (IPCC). Abundances of these trace gases are regularly measured by atmospheric scientists from samples collected throughout the world. It excludes water vapor because changes in its concentrations are calculated as a climate change feedback indirectly caused by changes in other greenhouse gases, as well as ozone, whose concentrations are only modified indirectly by various refrigerants that cause ozone depletion. Some short-lived gases (e.g. carbon monoxide, NOx) and aerosols (e.g. mineral dust or black carbon) are also excluded because of their limited role and strong variation, along with minor refrigerants and other halogenated gases, which have been mass-produced in smaller quantities than those in the table. (Table data are drawn in part from Annex III of the 2021 IPCC WG1 Report. Mole fractions: μmol/mol = ppm = parts per million (10⁶); nmol/mol = ppb = parts per billion (10⁹); pmol/mol = ppt = parts per trillion (10¹²).) The IPCC states that "no single atmospheric lifetime can be given" for CO2. This is mostly due to the rapid growth and cumulative magnitude of the disturbances to Earth's carbon cycle by the geologic extraction and burning of fossil carbon. As of year 2014, fossil CO2 emitted as a theoretical 10 to 100 GtC pulse on top of the existing atmospheric concentration was expected to be 50% removed by land vegetation and ocean sinks in less than about a century, as based on the projections of coupled models referenced in the AR5 assessment. A substantial fraction (20–35%) was also projected to remain in the atmosphere for centuries to millennia, where fractional persistence increases with pulse size. Values are relative to year 1750. AR6 reports the effective radiative forcing which includes effects of rapid adjustments in the atmosphere and at the surface. Factors affecting concentrations Atmospheric concentrations are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound or absorption by bodies of water). Airborne fraction The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). The annual airborne fraction is the ratio of the atmospheric increase in a given year to that year's total emissions. The annual airborne fraction for CO2 has been stable at 0.45 for the past six decades even as emissions have been increasing. This means that the other 0.55 of emitted CO2 is absorbed by the land and ocean carbon sinks within the first year of an emission. In the high-emission scenarios, the effectiveness of carbon sinks will be lower, increasing the atmospheric fraction of CO2 even though the raw amount of emissions absorbed will be higher than in the present. Atmospheric lifetime Major greenhouse gases are well mixed and take many years to leave the atmosphere. 
The atmospheric lifetime of a greenhouse gas refers to the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime. In a one-box model, the lifetime τ of an atmospheric species X is the average time that a molecule of X remains in the box. It can also be defined as the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (F_out), chemical loss of X (L), and deposition of X (D) (all in kg/s): τ = m / (F_out + L + D). If input of this gas into the box ceased, then after time τ, its concentration would decrease by about 63%. Changes to any of these variables can alter the atmospheric lifetime of a greenhouse gas. For instance, methane's atmospheric lifetime is estimated to have been lower in the 19th century than now, but to have been higher in the second half of the 20th century than after 2000. Carbon dioxide has an even more variable lifetime, which cannot be specified down to a single number. Scientists instead say that while the first 10% of carbon dioxide's airborne fraction (not counting the ~50% absorbed by land and ocean sinks within the emission's first year) is removed "quickly", the vast majority of the airborne fraction – 80% – lasts for "centuries to millennia". The remaining 10% stays for tens of thousands of years. In some models, this longest-lasting fraction is as large as 30%. During geologic time scales Monitoring Greenhouse gas monitoring involves the direct measurement of atmospheric concentrations and direct and indirect measurement of greenhouse gas emissions. Indirect methods calculate emissions of greenhouse gases based on related metrics such as fossil fuel extraction. There are several different methods of measuring carbon dioxide concentrations in the atmosphere, including infrared analysis and manometry. Methane and nitrous oxide are measured by other instruments, such as the range-resolved infrared differential absorption lidar (DIAL). Greenhouse gases are measured from space, such as by the Orbiting Carbon Observatory, and through networks of ground stations such as the Integrated Carbon Observation System. The Annual Greenhouse Gas Index (AGGI) is defined by atmospheric scientists at NOAA as the ratio of the total direct radiative forcing due to long-lived and well-mixed greenhouse gases for any year for which adequate global measurements exist, to that present in year 1990. These radiative forcing levels are relative to those present in year 1750 (i.e. prior to the start of the industrial era). 1990 is chosen because it is the baseline year for the Kyoto Protocol, and is the publication year of the first IPCC Scientific Assessment of Climate Change. As such, NOAA states that the AGGI "measures the commitment that (global) society has already made to living in a changing climate. It is based on the highest quality atmospheric observations from sites around the world. Its uncertainty is very low." Data networks Types of sources Natural sources The natural flows of carbon between the atmosphere, ocean, terrestrial ecosystems, and sediments are fairly balanced, so carbon levels would be roughly stable without human influence.
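The one-box lifetime model described under "Atmospheric lifetime" above implies that, once input stops, an excess of a gas decays exponentially with time constant τ, which is why about 63% of the excess is gone after one lifetime. The short sketch below illustrates this behaviour; the 12-year lifetime is an assumption (roughly the value often quoted for methane), used here only as an example.

```python
import math

def remaining_fraction(t_years: float, lifetime_years: float) -> float:
    """Fraction of an initial excess of a gas remaining t years after all
    input to the one-box model has ceased (exponential decay, exp(-t/tau))."""
    return math.exp(-t_years / lifetime_years)

tau = 12.0  # assumed lifetime in years, roughly that often cited for methane
print(round(1.0 - remaining_fraction(tau, tau), 2))  # ~0.63 of the excess removed after one lifetime
print(round(remaining_fraction(50.0, tau), 3))       # only ~0.016 of the excess remains after 50 years
```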
Carbon dioxide is removed from the atmosphere primarily through photosynthesis and enters the terrestrial and oceanic biospheres. Carbon dioxide also dissolves directly from the atmosphere into bodies of water (oceans, lakes, etc.), as well as dissolving in precipitation as raindrops fall through the atmosphere. When dissolved in water, carbon dioxide reacts with water molecules and forms carbonic acid, which contributes to ocean acidity. It can then be absorbed by rocks through weathering. It also can acidify other surfaces it touches or be washed into the ocean. Human-made sources The vast majority of carbon dioxide emissions by humans come from the burning of fossil fuels. Additional contributions come from cement manufacturing, fertilizer production, and changes in land use such as deforestation. Methane emissions originate from agriculture, fossil fuel production, waste, and other sources. If current emission rates continue, then temperature rises will surpass sometime between 2040 and 2070, which is the level the United Nations' Intergovernmental Panel on Climate Change (IPCC) says is "dangerous". Most greenhouse gases have both natural and human-caused sources. An exception is the purely human-produced synthetic halocarbons, which have no natural sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant, because the large natural sources and sinks roughly balanced. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests. Reducing human-caused greenhouse gases Needed emissions cuts Removal from the atmosphere through negative emissions Several technologies remove greenhouse gas emissions from the atmosphere. Most widely analyzed are those that remove carbon dioxide from the atmosphere, either to geologic formations, as with bio-energy with carbon capture and storage and carbon dioxide air capture, or to the soil, as in the case with biochar. Many long-term climate scenario models require large-scale human-made negative emissions to avoid serious climate change. Negative emissions approaches are also being studied for atmospheric methane, called atmospheric methane removal. History of discovery In the late 19th century, scientists experimentally discovered that nitrogen and oxygen do not absorb infrared radiation (called, at that time, "dark radiation"), while water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds), carbon dioxide and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century, researchers realized that greenhouse gases in the atmosphere made Earth's overall temperature higher than it would be without them. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901. During the late 20th century, a scientific consensus evolved that increasing concentrations of greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system, with consequences for the environment and for human health. Other planets Greenhouse gases exist in many atmospheres, creating greenhouse effects on Mars, Titan, and particularly in the thick atmosphere of Venus.
While Venus has been described as the ultimate end state of a runaway greenhouse effect, such a process would have virtually no chance of occurring from any increases in greenhouse gas concentrations caused by humans: the Sun's brightness is currently too low, and it would likely need to increase by some tens of percent, which will take a few billion years.
Physical sciences
Climate change
Earth science
2026564
https://en.wikipedia.org/wiki/Tissue%20paper
Tissue paper
Tissue paper, or simply tissue, is a lightweight paper or light crêpe paper. Tissue can be made from recycled paper pulp on a paper machine. Tissue paper is very versatile, and different kinds are made to best serve particular purposes, including hygienic tissue paper, facial tissues, paper towels and packing material, among other (sometimes creative) uses. The use of tissue paper is common in developed nations (around 21 million tonnes in North America and 6 million in Europe) and is growing due to urbanization. As a result, the industry has often been scrutinized for deforestation. However, more companies are presently using more recycled fibres in tissue paper. Properties The key properties of tissues are absorbency, basis weight, thickness, bulk (specific volume), brightness, stretch, appearance and comfort. Production Tissue paper is produced on a paper machine that has a single large steam-heated drying cylinder (Yankee dryer) fitted with a hot air hood. The raw material is paper pulp. The Yankee cylinder is sprayed with adhesives to make the paper stick. Creping is done by the Yankee's doctor blade, which scrapes the dry paper off the cylinder surface. The crinkle (crêping) is controlled by the strength of the adhesive, the geometry of the doctor blade, the speed difference between the Yankee and the final section of the paper machine, and the characteristics of the paper pulp. The highest water-absorbing applications are produced with a through air drying (TAD) process. These papers contain high amounts of NBSK (northern bleached softwood kraft) and CTMP (chemithermomechanical pulp). This gives a bulky paper with high wet tensile strength and good water holding capacity. The TAD process uses about twice the energy compared with conventional drying of paper. The properties are controlled by pulp quality, crêping and additives (both in the base paper and as coating). The wet strength is often an important parameter for tissue. Applications Hygienic tissue paper Hygienic tissue paper is commonly used for personal purposes as facial tissue (paper handkerchiefs), napkins, bathroom tissue and household towels. Paper has been used for hygiene purposes for centuries, but tissue paper as we know it today was not produced in the United States before the mid-1940s. In Western Europe, large-scale industrial production started at the beginning of the 1960s. Facial tissues Facial tissue (paper handkerchiefs) refers to a class of soft, absorbent, disposable paper that is suitable for use on the face. The term is commonly used to refer to the type of facial tissue, usually sold in boxes, that is designed to facilitate the expulsion of nasal mucus, although it may refer to other types of facial tissues including napkins and wipes. The first tissue handkerchiefs were introduced in the 1920s. They have been refined over the years, especially for softness and strength, but their basic design has remained constant. Today each person in Western Europe uses about 200 tissue handkerchiefs a year, with a variety of 'alternative' functions including the treatment of minor wounds, the cleaning of face and hands and the cleaning of spectacles. The importance of the paper tissue in minimising the spread of infection has been highlighted in light of fears over a swine flu epidemic. In the UK, for example, the Government ran a campaign called "Catch it, Bin it, Kill it", which encouraged people to cover their mouth with a paper tissue when coughing or sneezing. Demand for tissue papers has grown further in the wake of heightened hygiene concerns in response to the coronavirus pandemic.
Paper towels Paper towels are the second largest application for tissue paper in the consumer sector. This type of paper usually has a basis weight of 20 to 24 g/m2. Normally such paper towels are two-ply. This kind of tissue can be made from 100% chemical pulp to 100% recycled fibre, or a combination of the two. Normally, some long-fibre chemical pulp is included to improve strength. Wrapping tissue Wrapping tissue is a type of thin, translucent tissue paper used for wrapping/packing various articles and cushioning fragile items. Custom-printed wrapping tissue is becoming a popular trend for boutique retail businesses, and various on-demand custom-printed wrapping tissue papers are available online. Sustainably printed custom tissue wrapping paper is printed on FSC-certified, acid-free paper and uses only soy-based inks. Toilet paper Rolls of toilet paper have been available since the end of the 19th century. Today, more than 20 billion rolls of toilet tissue are used each year in Western Europe. Toilet paper brands include Andrex (United Kingdom), Charmin (United States) and Quilton (Australia), among many others. Table napkins Table napkins can be made of tissue paper. These are made from one up to four plies and in a variety of qualities, sizes, folds, colours and patterns depending on intended use and prevailing fashions. The composition of raw materials varies a lot, from deinked to chemical pulp, depending on quality. Acoustic disrupter In the late 1970s and early 1980s, a sound recording engineer named Bob Clearmountain was said to have hung tissue paper over the tweeters of his pair of Yamaha NS-10 speakers to tame the over-bright treble coming from them. The phenomenon became the subject of hot debate and an investigation into the sonic effects of many different types of tissue paper. The authors of a study for Studio Sound magazine suggested that had the speakers' grilles been used in studios, they would have had the same effect on the treble output as the improvised tissue paper filter. Another tissue study found inconsistent results with different paper, but said that tissue paper generally demonstrated an undesirable effect known as "comb filtering", where the high frequencies are reflected back into the tweeter instead of being absorbed. The author derided the tissue practice as "aberrant behavior", saying that engineers usually fear comb filtering and its associated cancellation effects, and suggesting that more controllable and less random electronic filtering would be preferable. Road repair Tissue paper, in the form of standard single-ply toilet paper, is commonly used in road repair to protect crack sealants. The sealants require upwards of 40 minutes to cure enough not to stick to passing traffic. The application of toilet paper removes the stickiness and keeps the tar in place, allowing the road to be reopened immediately and increasing road repair crew productivity. The paper breaks down and disappears in the following days. The use has been credited to Minnesota Department of Transportation employee Fred Muellerleile, who came up with the idea in 1970 after initially trying standard office paper, which worked but did not disintegrate easily. Packing industry Apart from the above, a range of speciality tissues is also manufactured for use in the packing industry. These are used for wrapping/packing various items, cushioning fragile items, stuffing shoes/bags etc. to keep their shape intact, or inserting in garments etc.
while packing/folding to keep them wrinkle-free and safe. It is generally printed with the manufacturer's brand name or logo to enhance the look and aesthetic appeal of the product. It is a type of thin, translucent paper, generally with grammages between 17 and 40 GSM, that can be rough or glossy, hard or soft, depending upon the nature of use. Origami The use of double-tissue, triple-tissue, tissue-foil and methyl cellulose coated tissue papers is gaining popularity. Due to its low grammage, the paper can be folded into intricate models when treated with methyl cellulose (also referred to as MC). The inexpensive paper provides very good paper memory paired with strength when MC-treated. Origami models sometimes require papers that are both thin and highly malleable; for this, tissue-foil is considered a prime choice. The industry In North America, people consume around three times as much tissue as in Europe. Out of the world's estimated production of of tissue, Europe produces approximately . The European tissue market is worth approximately 10 billion Euros annually and is growing at a rate of around 3%. The European market represents around 23% of the global market. Of the total paper and board market, tissue accounts for 10%. According to market research in Europe, Germany was one of the top tissue-consuming countries in Western Europe, while Sweden had the highest per-capita consumption of tissue paper in the region. In Europe, the industry is represented by the European Tissue Symposium (ETS), a trade association. The members of ETS represent the majority of tissue paper producers throughout Europe and about 90% of total European tissue production. ETS was founded in 1971 and has been based in Brussels since 1992. In the U.S., the tissue industry is organized in the AF&PA. Tissue paper production and consumption are predicted to continue to grow because of factors like urbanization, increasing disposable incomes and consumer spending. In 2015, the global market for tissue paper was growing at per annum rates of between 8–9% (China, at the time about 40% of the global market) and 2–3% (Europe). During the COVID-19 pandemic, tissue demand for homes increased dramatically as people spent more time in their homes, while commercial demand for the product decreased. Companies The largest tissue producing companies by capacity – some of them also global players – in 2015 are (in descending order):
Essity
Kimberly-Clark
Georgia-Pacific
Asia Pulp & Paper (APP)/Sinar Mas
Procter & Gamble
Sofidel Group
CMPC
WEPA Hygieneprodukte
Metsä Group
Cascades
Sustainability The paper industry in general has a long history of accusations of being responsible for global deforestation through legal and illegal logging. The WWF has urged Asia Pulp & Paper (APP), "one of the world's most notorious deforesters", especially in Sumatran rain forests, to become an environmentally responsible company; in 2012, the WWF launched a campaign to remove a brand of toilet paper known to be made from APP fiber from grocery store shelves. According to the Worldwatch Institute, the world per capita consumption of toilet paper was 3.8 kilograms in 2005. The WWF estimates that "every day, about 270,000 trees are flushed down the drain or end up as garbage all over the world", a rate of which about 10% is attributable to toilet paper alone. Meanwhile, the paper tissue industry, along with the rest of the paper manufacturing sector, has worked to minimise its impact on the environment.
Recovered fibres now represent some 46.5% of the paper industry's raw materials. The industry relies heavily on biofuels (about 50% of its primary energy). Its specific primary energy consumption has decreased by 16% and its specific electricity consumption has decreased by 11%, due to measures such as improved process technology and investment in combined heat and power (CHP). Specific carbon dioxide emissions from fossil fuels decreased by 25% due to process-related measures and the increased use of low-carbon and biomass fuels. Once consumed, most forest-based paper products start a new life as recycled material or biofuel. EDANA, the trade body for the non-woven absorbent hygiene products industry (which includes products such as household wipes for use in the home), has reported annually on the industry's environmental performance since 2005. Less than 1% of all commercial wood production ends up as wood pulp in absorbent hygiene products. The industry contributes less than 0.5% of all solid waste and around 2% of municipal solid waste (MSW), compared with paper and board, garden waste and food waste, which each comprise between 18 and 20 percent of MSW. There has been a great deal of interest, in particular, in the use of recovered fibres to manufacture new tissue paper products. However, whether this is actually better for the environment than using new fibres is open to question. A life-cycle assessment study indicated that neither fibre type can be considered environmentally preferable. In this study both new fibre and recovered fibre offer environmental benefits and shortcomings. Total environmental impacts vary case by case, depending on, for example, the location of the tissue paper mill, availability of fibres close to the mill, energy options and waste utilization possibilities. There are opportunities to minimise environmental impacts when using each fibre type. When using recovered fibres, it is beneficial to:
Source fibres from integrated deinking operations to eliminate the need for thermal drying of fibre or long-distance transport of wet pulp;
Manage deinked sludge in order to maximise beneficial applications and minimise the waste burden on society; and
Select the recovered paper according to the end-product requirements, in a way that also allows the most efficient recycling process.
When using new fibres, it is beneficial to:
Manage the raw material sources to maintain legal, sustainable forestry practices by implementing processes such as forest certification systems and chain of custody standards; and
Consider opportunities to introduce new and more renewable energy sources and increase the use of biomass fuels to reduce emissions of carbon dioxide.
When using either fibre type, it is beneficial to:
Improve energy efficiency in tissue manufacturing;
Examine opportunities for changing to alternative, non-fossil-based sources of energy for tissue manufacturing operations;
Deliver products that maximise functionality and optimize consumption; and
Investigate opportunities for alternative product disposal systems that minimize the environmental impact of used products.
The Confederation of European Paper Industries (CEPI) has published reports focusing on the industry's environmental credentials. In 2002, it noted that "a little over 60% of the pulp and paper produced in Europe comes from mills certified under one of the internationally recognised eco-management schemes".
There are a number of ‘eco-labels’ designed to help consumers identify paper tissue products which meet such environmental standards. Eco-labelling entered mainstream environmental policy-making in the late 1970s, first with national schemes such as the German Blue Angel programme, followed by the Nordic Swan (1989). In 1992 a European eco-labelling regulation, known as the EU Flower, was also adopted. The stated objective is to support sustainable development, balancing environmental, social and economic criteria. In 2019, the NRDC and Stand.earth released a report grading various brands of toilet paper, paper towels, and facial tissue; the report criticized major brands for lacking recycled material. Types of eco-labels There are three types of eco-labels, each defined by the ISO (International Organization for Standardization). Type I: ISO 14024 This type of eco-label is one where the criteria are set by third parties (not the manufacturer). They are in theory based on life cycle impacts and are typically based on pass/fail criteria. The one that has European application is the EU Flower. Type II: ISO 14021 These are based on the manufacturer's or retailer's own declarations. Well known amongst these are claims of "100% recycled" in relation to tissue/paper. Type III: ISO 14025 These claims give quantitative details of the impact of the product based on its life cycle. Sometimes known as EPDs (Environmental Product Declarations), these labels are based on an independent review of the life cycle of the product. The data supplied by the manufacturing companies are also independently reviewed. The best-known example in the paper industry is the Paper Profile. A Paper Profile meets the Type III requirements when the verifier's logo is included on the document. An example of an organization that sets standards is the Forest Stewardship Council.
Technology
Materials
null
2026577
https://en.wikipedia.org/wiki/Snakefly
Snakefly
Snakeflies are a group of predatory insects comprising the order Raphidioptera with two extant families: Raphidiidae and Inocelliidae, consisting of roughly 260 species. In the past, the group had a much wider distribution than it does now; snakeflies are found in temperate regions worldwide but are absent from the tropics and the Southern Hemisphere. Recognizable representatives of the group first appeared during the Early Jurassic. They are a relict group, having reached their apex of diversity during the Cretaceous before undergoing substantial decline. An adult snakefly resembles a lacewing in appearance but has a notably elongated thorax which, together with the mobile head, gives the group their common name. The body is long and slender and the two pairs of long, membranous wings are prominently veined. Females have a large and sturdy ovipositor which is used to deposit eggs in some concealed location. They are holometabolous insects with a four-stage life cycle consisting of eggs, larvae, pupae and adults. In most species, the larvae develop under the bark of trees. They may take several years before they undergo metamorphosis, requiring a period of chilling before pupation takes place. Both adults and larvae are predators of soft-bodied arthropods. Description Adult snakeflies are easily distinguished from similar insects by having an elongated prothorax but not the modified forelegs of the mantis-flies. Most species are between in length. The head is long and flattened, and heavily sclerotised; it may be broad or taper at the back, but is very mobile. The mouthparts are strong and relatively unspecialised, being modified for biting. The large compound eyes are at the sides of the head. Members of the family Inocelliidae have no simple eyes; members of the Raphidiidae do have such eyes, but are mostly differentiated by elimination, lacking the traits found in inocelliids. The prothorax is notably elongated and mobile, giving the group its common name of snakefly. The three pairs of legs are similar in size and appearance. The two pair of dragonfly-like wings are similar in size, with a primitive venation pattern, a thickened leading edge, and a coloured wingspot, the pterostigma. Inocelliids lack a cross vein in the pterostigma that is present in raphidiids. The females in both families typically have a long ovipositor, which they use to deposit their eggs into crevices or under bark. Distribution and habitat Snakeflies are usually found in temperate coniferous forest. They are distributed widely around the globe, the majority of species occurring in Europe and Asia, but also being found in certain regions of Africa, western North America and Central America. In Africa, they are only found in the mountains north of the Sahara Desert. In North America, they are found west of the Rocky Mountains, and range from southwest Canada all the way to the Mexican-Guatemalan Border, which is the furthest south they have been found in the western hemisphere. In the eastern hemisphere, they can be found from Spain to Japan. Many species are found throughout Europe and Asia with the southern edge of their range in northern Thailand and northern India. Snakeflies have a relict distribution, having had a more widespread range and being more diverse in the past; there are more species in Central Asia than anywhere else. In the southern parts of their range, they are largely restricted to higher altitudes, up to around . 
Even though this insect order is widely distributed, the range of individual species is often very limited and some species are confined to a single mountain range. Life cycle Snakeflies are holometabolous insects, having a four-stage life cycle with eggs, larvae, pupae and adults. Before mating, the adults engage in an elaborate courtship ritual, including a grooming behaviour involving legs and antennae. In raphidiids, mating takes place in a "dragging position", while in inocelliids, the male adopts a tandem position under the female; copulation may last for up to three hours in some inoceliid species. The eggs are oviposited into suitable locations and hatch in from a few days to about three weeks. The larvae have large heads with projecting mandibles. The head and the first segment of the thorax are sclerotised, but the rest of the body is soft and fleshy. They have three pairs of true legs, but no prolegs. However, they do possess an adhesive organ on the abdomen, which they can use to fasten themselves to vertical surfaces. There is no set number of instars the larvae will go through, some species can have as many as ten or eleven. The larval stage usually lasts for two to three years, but in some species can extend for six years. The final larval instar, the prepupal stage, creates a cell in which the insect pupates. The pupa is able to bite when disturbed, and shortly before the adult emerges, it gains the ability to walk and often leaves its cell for another location. All snakeflies require a period of cool temperatures (probably around ) to induce pupation. The length of the pupation stage is variable. Most species pupate in the spring or early summer, and take a few days to three weeks before ecdysis. If the larvae begin pupation in the late summer or early fall, they can take up to ten months before the adults emerge. Insects reared at constant temperatures in a laboratory may become "prothetelous", developing the compound eyes and wingpads of pupae, but living for years without completing metamorphosis. Ecology Adult snakeflies are territorial and carnivorous organisms. They are diurnal and are important predators of aphids and mites. Pollen has also been found in the guts of these organisms and it is unclear whether they require pollen for part of their lifecycle or if it is a favoured food source. The larvae of many raphidiids live immediately below the bark of trees, although others live around the tree bole, in crevices in rocks, among leaf litter and in detritus. Here they feed on the eggs and larvae of other arthropods such as mites, springtails, spiders, barklice, sternorrhynchids and auchenorrhynchids. The actual diets of the larvae vary according to their habitats, but both larvae and adults are efficient predators. Predators of snakeflies include birds; in Europe, these are woodland species such as the treecreeper, great spotted woodpecker, wood warbler, nuthatch, and dunnock, as well as generalist insect-eating species such as the collared flycatcher. Typically 5-15% of snakefly larvae are parasitized, mainly by parasitoid wasps, but rates as high as 50% have been observed in some species. Evolution During the Mesozoic era (252 to 66 mya), there was a large and diverse fauna of Raphidioptera as exemplified by the abundant fossils that have been found in all parts of the world. 
This came to an abrupt end at the end of the Cretaceous period, likely as a result of the Cretaceous–Paleogene extinction event (66 mya) when an enormous asteroid is thought to have hit the Earth. This seems to have extinguished all but the most cold-tolerant species of snakefly, resulting in the extinction of the majority of families, including all the tropical and sub-tropical species. The two families of present-day Raphidioptera are thus relict populations of this previously widespread group. They have been considered living fossils, because modern-day species closely resemble species from the early Jurassic period (140 mya). There are about 260 extant species. Fossil history Several extinct families are known only from fossils dating from the Lower Jurassic to the Miocene, the great majority of them belonging to the suborder Raphidiomorpha. The transitional Middle Jurassic Juroraphidiidae form a clade with the Raphidiomorpha. Phylogeny Molecular analysis using mitochondrial RNA and the mitogenome has clarified the group's phylogeny within the Neuropterida, as shown in the cladogram. The name Raphidioptera is formed from Greek ῥαφίς (raphis), meaning needle, and πτερόν (pteron), meaning wing. The Megaloptera, Neuroptera (in the modern sense) and Raphidioptera are very closely related, forming the group Neuropterida. This is either placed at superorder rank, with the Holometabola – of which they are part – becoming an unranked clade above it, or the Holometabola are maintained as a superorder, with an unranked Neuropterida being a part of them. Within the holometabolans, the closest living relatives of Neuropterida are the beetles. Two suborders of Raphidioptera and their families are grouped below according to Engel (2002) with updates according to Bechly and Wolf-Schwenninger (2011) and Ricardo Pérez-de la Fuente et al. (2012). For lists of genera, see the articles on the individual families. 
Raphidioptera
†Priscaenigmatomorpha
?Genus Chrysoraphidia - Yixian Formation, China, Early Cretaceous (Aptian) (some authors have suggested closer affinities to Neuroptera)
Family †Priscaenigmatidae - (Early Jurassic-Early Cretaceous)
Genus †Hondelagia - Green Series, Germany, Early Jurassic (Toarcian)
Genus †Priscaenigma - Charmouth Mudstone Formation, United Kingdom, Early Jurassic (Sinemurian)
Genus †Sukachevia - Karabastau Formation, Kazakhstan, Late Jurassic
Genus †Cretohondelagia - Khasturty locality, Russia, Early Cretaceous (Aptian)
Family †Juroraphidiidae
Genus †Juroraphidia - Jiulongshan Formation, China, Middle Jurassic
Raphidiomorpha Engel, 2002
Family †Metaraphidiidae - (Early Jurassic)
Genus †Metaraphidia - Charmouth Mudstone Formation, United Kingdom, Early Jurassic (Sinemurian); Posidonia Shale, Early Jurassic (Toarcian)
Family †Baissopteridae - (Cretaceous-Eocene)
Genus †Allobaissoptera - Burmese amber, Mid Cretaceous (Albian-Cenomanian)
Genus †Ascalapharia - Kzyl-Zhar, Kazakhstan, Late Cretaceous (Turonian)
Genus †Austroraphidia - Crato Formation, Brazil, Early Cretaceous (Aptian)
Genus †Baissoptera - Crato Formation, Brazil; Yixian Formation, China; Zaza Formation, Russia, Early Cretaceous (Aptian); Spanish amber, Early Cretaceous (Albian); Burmese amber
Genus †Burmobiassoptera - Burmese amber, Mid Cretaceous (Albian-Cenomanian)
Genus †Cretoraphidia - Zaza Formation, Russia, Early Cretaceous (Aptian)
Genus †Cretoraphidiopsis - Dzun-Bain Formation, Mongolia, Early Cretaceous (Aptian)
Genus †Dictyoraphidia - Florissant Formation, Colorado, United States, Eocene (Priabonian)
Genus †Electrobaissoptera - Burmese amber, Mid Cretaceous (Albian-Cenomanian)
Genus †Lugala - Dzun-Bain Formation, Mongolia, Early Cretaceous (Aptian)
Genus †Microbaissoptera - Yixian Formation, China, Early Cretaceous (Aptian)
Genus †Rhynchobaissoptera - Burmese amber, Mid Cretaceous (Albian-Cenomanian)
Genus †Stenobaissoptera - Burmese amber, Mid Cretaceous (Albian-Cenomanian)
Family †Mesoraphidiidae (paraphyletic) (30+ genera) - (Middle Jurassic-Late Cretaceous)
Neoraphidioptera Engel, 2007 - (Paleogene-Recent)
Family Inocelliidae
Subfamily †Electrinocelliinae
Subfamily Inocelliinae
Family Raphidiidae
Incertae sedis
Genus †Arariperaphidia - (Lower Cretaceous; Brazil)
Possible biological pest control agents Snakeflies have been considered a viable option for biological control of agricultural pests. The main advantage is that they have few predators, and both adults and larvae are predacious. A disadvantage is that snakeflies have a long larval period, so their numbers increase only slowly, and it could take a long time to rid crops of pests; another issue is that they prey on a limited range of pest species. An unidentified North American species was introduced into Australia and New Zealand in the early twentieth century for this purpose, but failed to become established.
Biology and health sciences
Insects: General
Animals
2028368
https://en.wikipedia.org/wiki/Mustard%20plant
Mustard plant
The mustard plant is any one of several plant species in the genera Brassica, Rhamphospermum and Sinapis in the family Brassicaceae (the mustard family). Mustard seed is used as a spice. Grinding and mixing the seeds with water, vinegar, or other liquids creates the yellow condiment known as prepared mustard. The seeds can also be pressed to make mustard oil, and the edible leaves can be eaten as mustard greens. Many vegetables are cultivated varieties of mustard plants; domestication may have begun 6,000 years ago. History Although some varieties of mustard plants were well-established crops in Hellenistic and Roman times, Zohary and Hopf note, "There are almost no archeological records available for any of these crops." Wild forms of mustard and its relatives, the radish and turnip, can be found across West Asia and Europe, suggesting their domestication took place somewhere in that area. However, Zohary and Hopf conclude: "Suggestions as to the origins of these plants are necessarily based on linguistic considerations." The Encyclopædia Britannica states that mustard was grown by the Indus Civilization of 2500–1700 BC. According to the Saskatchewan Mustard Development Commission, "Some of the earliest known documentation of mustard's use dates back to Sumerian and Sanskrit texts from 3000 BC". A wide-ranging genetic study of B. rapa announced in 2021 concluded that the species may have been domesticated as long as 6,000 years ago in Central Asia, and turnips or oilseeds might have been the first product. The results also suggested that a taxonomic re-evaluation of the species might be needed. Species White mustard (Sinapis alba) grows wild in North Africa, West Asia, and Mediterranean Europe, and has spread further by long cultivation; brown mustard (Brassica juncea), initially from the foothills of the Himalayas, is grown commercially in India, Canada, the United Kingdom, Denmark, Bangladesh and the United States; black mustard (Brassica nigra) is grown in Argentina, Chile, the US, and some European countries. Canada and Nepal are the world's major producers of mustard seed, between them accounting for around 57% of world production in 2010. White mustard is commonly used as a cover crop in Europe (from the UK to Ukraine). Many varieties exist, e.g., in Germany and the Netherlands, mostly differing in lateness of flowering and resistance against white beet-cyst nematode (Heterodera schachtii). Farmers prefer late-flowering varieties, which do not produce seeds that may become weeds in the subsequent year. Early vigor is important to cover the soil quickly, suppress weeds and protect the soil against erosion. In rotations with sugar beets, suppression of the white beet-cyst nematode is an important trait. Resistant white mustard varieties reduce nematode populations by 70–90%. Mustard species are a common host plant to Phaedon cochleariae, a beetle native to Europe. Due to their particular diet, these beetles have been colloquially referred to as mustard leaf beetles. Recent research has studied varieties of mustards with high oil contents for use in the production of biodiesel, a renewable liquid fuel similar to diesel fuel. The biodiesel made from mustard oil has good flow properties and cetane ratings. The leftover meal after pressing out the oil has also been found to be an effective pesticide.
Biology and health sciences
Herbs and spices
Plants
2029258
https://en.wikipedia.org/wiki/Embioptera
Embioptera
The order Embioptera, commonly known as webspinners or footspinners, are a small group of mostly tropical and subtropical insects, classified under the subclass Pterygota. The order has also been called Embiodea or Embiidina. More than 400 species in 11 families have been described, the oldest known fossils of the group being from the mid-Jurassic. Species are very similar in appearance, having long, flexible bodies, short legs, and only males having wings. Webspinners are gregarious, living subsocially in galleries of fine silk which they spin from glands on their forelegs. Members of these colonies are often related females and their offspring; adult males do not feed and die soon after mating. Males of some species have wings and are able to disperse, whereas the females remain near where they were hatched. Newly mated females may vacate the colony and found a new one nearby. Others may emerge to search for a new food source to which the galleries can be extended, but in general, the insects rarely venture from their galleries. Name and etymology The name Embioptera ("lively wings") comes from Greek (), meaning "lively", and (), meaning "wing", a name that has not been considered to be particularly descriptive for this group of fliers, perhaps instead referring to their remarkable speed of movement both forward and backward. The common name webspinner comes from the insects' unique tarsi on their front legs, which produce multiple strands of silk. They use the silk to make web-like galleries in which they live. Early entomologists considered the webspinners to be a group within the termites or the neuropterans and a variety of group names have been suggested including Adenopoda, Embidaria, Embiaria, and Aetioptera. In 1909 Günther Enderlein used the name Embiidina which was used widely for a while. Edward S. Ross suggested a new name, Embiomorpha in 2007. The currently most-widely accepted ordinal name is Embioptera, suggested by Arthur Shipley in 1904. Evolution Fossil history Fossils of webspinners are rare. The group probably first appeared during the Jurassic; the oldest known, Sinembia rossi and Juraembia ningchengensis, both in a new family Sinembiidae created for them, are from the Middle Jurassic of Inner Mongolia, and were described in 2009. The female of J. ningchengensis had wings, supporting Ross's proposal that both sexes of ancestral Embioptera were winged. Species such as Atmetoclothoda orthotenes, possibly the first fossil member of the Clothodidae to be discovered, sometimes thought to be a "primitive" family, have been found in mid-Cretaceous amber from northern Myanmar. Litoclostes delicatus (Oligotomidae) has been found in the same locality. The largest number of fossils have been found in mid-Eocene Baltic amber and early-Miocene Dominican amber. Flattened compression fossils that have been interpreted as being webspinners have been found from the Eocene/Oligocene shales of Florissant, Colorado. Phylogeny Over 400 embiopteran species in 11 families have been described worldwide, the largest proportion of which inhabit tropical regions. It is estimated that there may be around 2000 species extant today. The external phylogeny of Embioptera has been debated, with the polyneopteran order controversially classed in 2007 as a sister group to both Zoraptera (angel insects) and Phasmatodea (stick insects). 
The position of the Embioptera within the Polyneoptera suggested by a phylogenetic analysis carried out in 2012 by Miller et al., combining morphological and molecular evidence, is shown in the cladogram. The internal phylogeny of the group is not yet fully resolved. Miller et al.'s phylogenetic analysis examined 96 morphological characters and 5 genes for 82 species across the order. Four families were found to be robustly monophyletic in whatever way the phylogeny was analysed (parsimony, maximum likelihood, or Bayesian): Clothodidae, Anisembiidae, Oligotomidae, and Teratembiidae. The Embiidae, Scelembiidae, and Australembiidae remain monophyletic in one or more of the three analyses, but are broken up in others, so their status remains uncertain. Either the Clothodidae (under parsimony analysis) or Australembiidae (under Bayesian analysis) is the sister taxon to the remaining Embioptera taxa, so no single phylogenetic tree can be taken as definitive from this work. Description All webspinners have a remarkably similar body form, although they do vary in coloration and size. The majority are brown or black, ranging to pink or reddish shades in some species, and range in length from . The body form of these insects is completely specialised for the silk tunnels and chambers in which they reside, being cylindrical, long, narrow and highly flexible. The head has projecting mouthparts with chewing mandibles. The compound eyes are kidney-shaped, there are no ocelli, and the thread-like antennae are long, with up to 32 segments. The antennae are flexible, so they do not become entangled in the silk, and the wings have a crosswise crease, allowing them to fold forwards and enable the male to dart backwards without the wings snagging the fabric. The first segment of the thorax is small and narrow, while the second and third are larger and broader, especially in the males, where they include the flight muscles. All the females and nymphs are wingless, whereas adult males can be either winged or wingless depending on species. The wings, where present, occur as two pairs that are similar in size and shape: long and narrow, with relatively simple venation. These wings operate using basic hydraulics; pre-flight, chambers (sinus veins) within the wings inflate with hemolymph (blood), making them rigid enough for use. On landing, these chambers deflate and the wings become flexible, folding back against the body. Wings can also fold forwards over the body, and this, along with the flexibility allows easy movement through the narrow silk galleries, either forwards or backwards, without resulting in damage. In both males and females the legs are short and sturdy, with an enlarged basal tarsomere on the front pair, containing the silk-producing glands; the mid and hind legs also have three tarsal segments with the hind femur enlarged to house the strong tibial depressor muscles that enable rapid reverse movement. It is these silk glands on the front tarsi that distinguish the embiopterans; other noteworthy characteristics of this group include three-jointed tarsi, simple wing venation with few cross veins, prognathous (head with forward-facing mouthparts), and absence of ocelli (simple eyes). The abdomen has ten segments, with a pair of cerci on the final segment. 
These cerci, made up of two segments and asymmetric in length, especially in the males, are highly sensitive to touch, and allow the animal to navigate while moving backwards through the gallery tunnels, which are too narrow to allow the insect to turn round. Because morphology is so similar between taxa, species identification is extremely difficult. For this reason, the main form of taxonomic identification used in the past has been close observation of the distinctive copulatory structures of males (although this method is now thought by some entomologists and taxonomists to give insufficient classification detail). Although males never eat during their adult stage, they do have mouthparts similar to those of the females. These mouthparts are used to hold onto the female during copulation. Life cycle The eggs hatch into nymphs that resemble small, wingless adults. After a short period of parental care, the nymphs undergo hemimetaboly (incomplete metamorphosis), moulting a total of four times before reaching adult form. Adult males never eat, and leave the home colony almost immediately to find a female and mate. Those males that cannot fly often mate with females in nearby colonies, meaning their chosen mates are often siblings or close relatives. In some species, the female eats the male after mating, but in any event, the male does not survive for long. A few species are parthenogenetic, meaning they can produce viable offspring without fertilisation of the eggs. This phenomenon occurs when a female is, for whatever reason, unable to find a male to mate with, thus giving her and her species reproductive security at all times. After moulting and mating, the female either lays a single batch of eggs within the existing gallery, or wanders away to start a new colony elsewhere. Because the females are flightless, their potential for dispersal is limited to the distance a female can walk. Behaviour and ecology Behaviour Most, if not all, embiopteran species are gregarious but subsocial. Typically, adult females show maternal care of their eggs and young, and often live in large colonies with other adult females, creating and sharing the webbing cover that helps to protect them against predators. The advantages of living in these colonies outweigh the disadvantage that results from the increased parasite load that this lifestyle entails. Although some species breed once a year, or even once in two years, others breed more frequently, with Aposthonia ceylonica producing four or five batches of eggs in a twelve-month period. Maternal care starts with the placement of the eggs. Some species attach batches of eggs to the web structure with silk; others form the eggs into rows in grooves excavated in the bark; others fix them in rows with a cement formed from saliva, while many species bury them in a mass of silk, even incorporating other materials into the covering. The majority of embiopterans guard their eggs, some actually standing over them, the main exception being species such as Saussurembia calypso that scatter their eggs widely. The main threat to the eggs is from egg parasitoids, which can attack whole batches of undefended eggs. At this time the adult females become very territorial and aggressive to other individuals with whom they previously lived in harmony; three different types of vibratory signals are used to deter other embiopterans that approach the eggs too closely, and the intruder usually retires. After the eggs have hatched, the mothers resume their gregarious behaviour.
In some species, they continue caring for their young for several days after hatching, and in a few, this parental care even involves the female feeding the nymphs with portions of chewed-up leaf litter and other foods. The parthenogenetic Rhagadochir virgo incorporates scraps of lichen into the silk wrapping the eggs, and this may be eaten by newly hatched nymphs. Perhaps because individuals of this species are so closely related, the adults spin silk together and move around in coordinated groups. Even in species that provide no further parental care, the nymphs in the colony benefit from the greater silk-producing power of the adults and the extra protection that the more copious silk covering brings. Subsociality is a trade-off for the female, as the energy and time that is exerted in caring for her young is rewarded by giving them a much greater chance of surviving and carrying on her genetic lineage. Some species do share galleries with more than one adult, however, most groups consist of one adult female and her offspring. When webspinners clean their antennae, they may differ in their behavior from other insects which typically make use of the forelegs to either clean or bring the antennae toward the mouthparts for manipulation. Webspinners (as observed in the genus Oligembia) instead fold the antennae under the body and clean the antennae as they are held between the mouthparts and the substrate. When constructing their silken galleries, webspinners use characteristic cyclic movements of their forelegs, alternating actions with the left and right legs while also moving. There are variations in the choreography of these movements across species. Silk web production Embiopterans produce a silk thread similar to that produced by the silkworm, Bombyx mori. The silk is produced in spherical secretory glands in the swollen tarsi (lower leg segments) of the forelimbs, and can be produced by both adults and larvae. Unlike Bombyx mori and other silk-producing (and spinning) members of both Lepidoptera and Hymenoptera, which only have one pair of silk glands per individual, some species of embiid are estimated to have up to 300 silk glands: 150 in each forelimb. These glands are linked to a bristle-like cuticular process known as a silk ejector, and their exceedingly high numbers allow individuals to spin large amounts of silk very quickly, creating extensive galleries. The silk web is produced throughout all stages of the embiopteran lifespan, and requires modest energy output. Webspinner silk is among the thinnest of all animal silks, being in most species about 90 to 100 nanometres in diameter. The finest of any insect are those of the webspinner Aposthonia gurneyi, averaging about 65 nanometres in diameter. Each thread consists of a protein core folded into pleated beta-sheets, with a water-repellent coating rich in waxy alkanes. Galleries The galleries produced by embiopterans are tunnels and chambers woven from the silk they produce. These woven constructions can be found on substrates such as rocks and the bark of trees, or in leaf litter. Some species camouflage their galleries by decorating the outer layers with bits of leaf litter or other materials to match their surroundings. The galleries are essential to their life cycle, maintaining moisture in their environment, and also offering protection from predators and the elements while foraging, breeding and simply existing. 
Embiopterans only leave the gallery complex in search of a mate, or when females explore the immediate area in search of a new food source. Webspinners continually extend their galleries to reach new food sources, and expand their existing galleries as they grow in size. The insects spin silk by moving their forelegs back and forth over the substrate, and rotating their bodies to create a cylindrical, silk-lined tunnel. Older galleries have multiple laminate layers of silk. Each gallery complex contains several individuals, often descended from a single female, and forms a maze-like structure, extending from a secure retreat into whatever vegetable food matter is available nearby. The size and complexity of the colony vary between species, and they can be very extensive in those species that live in hot and humid climates. Diet The embiopteran diet varies between species, with available food sources changing with varying habitat. The nymphs and adult females feed on plant litter, bark, moss, algae and lichen. They are generalist herbivores; during his research, Ross maintained a number of species in the laboratory on a diet of lettuce and dry oak leaves. Adult males do not eat at all, dying soon after mating. Parasites and predators The Sclerogibbidae are a small family of aculeate wasps that are specialist parasites of embiopterans. The wasp lays an egg on the abdomen of a nymph. The wasp larva emerges and attaches itself to the host's body, consuming the host's tissues as it grows. It eventually forms a cocoon and drops off the carcass. A Neotropical tachinid fly, Perumyia embiaphaga, and a braconid wasp species in the genus Sericobracon, are known to be parasitoids of adult embioptera. A few scelionid wasps in the tribe Embidobiini are egg parasitoids of the Embioptera. A protozoan parasite in Italy effectively sterilises males, forcing the remaining female population to become parthenogenetic. These parasites and agents of disease may put evolutionary pressure on embiopterans to live more socially. Adult webspinners are vulnerable when they emerge from their galleries, and are preyed on by birds, geckos, ants and spiders. They have been observed being attacked by owlfly larvae. Birds may pull sheets of silk off the galleries to expose their prey, ants may cut holes to gain entry and harvestmen may pierce the silk to feed on the webspinners inside. Associates Another group of associates inside the galleries are bugs in the family Plokiophilidae. Whether these are feeding on embiopteran eggs or larvae, on mites and other residents of the gallery, or are scavenging is unclear. The embiopteran Aposthonia ceylonica has been found living inside a colony of the Indian cooperative spider, probably feeding on algae growing on the spider sheetweb, and two webspinner species have been discovered living in the outer covering of termites' nests, where their silk galleries may protect them from attack. Distribution and habitat Embiopterans are distributed worldwide, and are found on every continent except Antarctica, with the highest density and diversity of species being in tropical regions. Some common species have been accidentally transported to other parts of the world, while many native species are unobtrusive and yet to be detected. Some species live underground, or concealed under rocks or behind sections of loose bark. 
Others live out in the open, either swathed in sheets of white or blue silk, or hidden in less-conspicuous silken tubes, on the ground, on the trunks of trees or on the surface of granite rocks. Largely restricted to warmer locations, webspinners are found as far north as the state of Virginia in the United States (38°N), and as high as in Ecuador. They were absent from Britain until 2019, when Aposthonia ceylonica, a southeast Asian species, was found in a glasshouse at the RHS Garden, Wisley.
Biology and health sciences
Insects: General
Animals
2030304
https://en.wikipedia.org/wiki/Nail%20gun
Nail gun
A nail gun, nailgun or nailer is a form of hammer used to drive nails into wood or other materials. It is usually driven by compressed air (pneumatic), electromagnetism, highly flammable gases such as butane or propane, or, for powder-actuated tools, a small explosive charge. Nail guns have in many ways replaced hammers as tools of choice among builders. The nail gun was designed by Morris Pynoos, a civil engineer by training, for his work on Howard Hughes' Hughes H-4 Hercules (known as the Spruce Goose). The wooden fuselage was nailed together and glued, and then the nails were removed. The first nail gun used air pressure and was introduced to the market in 1950 to speed the construction of housing floor sheathing and sub-floors. With the original nail gun, the operator used it while standing and could nail 40 to 60 nails a minute. It had a capacity of 400 to 600 nails. Use Nail guns use fasteners mounted in long clips (similar to a stick of staples) or collated in a paper or plastic carrier, depending on the design of the nail gun. Some full head nail guns, especially those used for pallet making and roofing, use long plastic or wire collated coils. Some strip nailers use a clipped head so the nails can be closer together, which allows less frequent reloading. Clip head nails are sometimes banned by state or local building codes. Full Round Head nails and ring shank nails provide greater resistance to pull out. Nailers may also be of the 'coil' type where the fasteners come in wire or plastic collation, to be used with nail guns with a drum magazine; the advantage is many more fasteners per load, but at the expense of extra weight. Industrial nailers designed for use against steel or concrete may have a self-loading action for the explosive caps, but most need nails to be loaded by hand. Nail guns vary in the length and gauge (thickness) of nails they can drive. The smallest size of fasteners are normally 23 gauge ( in diameter), commonly called "pin nailers" and generally have only a minimal head. They are used for attaching everything from beadings, mouldings and so forth to furniture all the way up to medium-sized baseboard, crown molding and casing. Lengths are normally in the range , although some industrial tool manufacturers supply up to . The 23 gauge micro pin is rapidly gaining ground as users find that it leaves a much smaller hole than brad nails, thereby eliminating the time normally taken to fill holes and presenting a far better looking finished product. The next size up is the 18 gauge (1.02 mm diameter) fixing, often referred to as a "brad". These fastenings are also used to fix mouldings but can be used in the same way as the smaller 22 to 24 gauge fastenings. Their greater strength leads to their use in trim carpentry on hardwoods where some hole filling is acceptable. Most 18 gauge brads have heads, but some manufacturers offer headless fastenings. Lengths range from . The next sizes are 16 and 15 gauge (1.63 and 1.83 mm diameter). These are generally referred to as "finish nails". They come in lengths between and are used in the general fixing of much softwood and MDF trim work (such as baseboard/skirtings, architraves, etc.) where the holes will be filled and the work painted afterwards. The largest sizes of conventional collated fastenings are the clipped head and full head nails which are used in framing, fencing and other forms of structural and exterior work. 
These nails generally have a shank diameter of although some manufacturers offer smaller diameter nails as well. General lengths are in the range . Shank styles include plain, ring annular, twisted, etc. and a variety of materials and finishes are offered including plain steel, galvanized steel, sherardised steel, stainless steel, etc. depending on the pull-out resistance, corrosion resistance, etc. required for the given application. These sizes of fastenings are available in stick collated form (often 20° to 21° for full head, 28° to 34° for clipped head) or coil form (for use in pallet/roofing nailers) depending on the application. Full-head nails have greater pull-out resistance than clipped head nails and are mandated by code in many hurricane zones for structural framing. Another type of fastening commonly found in construction is the strap fastening which is roughly analogous to the large head clout nail. These are used in conjunction with a strap shot nailer (or positive placement nailer UK) to fix metalwork such as joist hangers, corner plates, strengthening straps, etc. to timber structures. They differ from conventional nailers in that the point of the fastening is not sheathed so it can be exactly positioned before firing the nail gun. Other specialist nailers are also available which can drive spikes up to long, fix wood to steel, etc. A palm nailer is a small, lightweight tool, typically pneumatic, which fits into the palm of one hand. It is convenient for working in tight spaces and can drive both short and long nails. Repeated hammer blows (of around 40 hits per second) rather than a single strike drive the fastener. Safety In the United States, about 42,000 people every year go to emergency rooms with injuries from nail guns, according to the U.S. Centers for Disease Control (CDC). 40% of those injuries occur to consumers. Nail gun injuries tripled between 1991 and 2005. Foot and hand injuries are among the most common. The U.S. Consumer Product Safety Commission estimates that treating nail gun wounds costs at least $338 million per year nationally in emergency medical care, rehabilitation, and workers' compensation. Often personnel selling the tools know little about the dangers associated with their use or safety features that can prevent injuries. Injuries to the fingers, hands, and feet are among the three most common, but there are also injuries that involve other body areas such as arms and legs as well as internal organs. Some of these injuries are serious and some have resulted in death. All kinds of nail guns can be dangerous, so safety precautions similar to those for a firearm are usually recommended for their use. For safety, nail guns are designed to be used with the muzzle contacting the target. Unless specifically modified for the purpose, they are not effective as a projectile weapon. The most common firing mechanism is the dual-action contact-trip trigger, which requires that the manual trigger and nose contact element both be depressed for a nail to be discharged. The sequential-trip trigger, which is safer, requires the nose contact to be depressed before the manual trigger, rather than simultaneously with the trigger. Approximately 65% to 69% of injuries from contact-trip tools could be prevented through the use of a sequential-trip trigger, according to the CDC. There is recoil associated with the discharge of a nail from a nail gun. 
Contact triggers allow the gun to fire unintended nails if the nose hits the wood surface or a previously placed nail following recoil. Nailers with touch tip (contact) triggers are susceptible to this double firing. According to a 2002 engineering report from the Consumer Products Safety Commission (CPSC), the recoil and firing of the second nail occurs well before the trigger can be released. Acute injury rates are twice as high among users of tools with contact triggers. In September 2011 The Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) issued a nail gun safety guide that details practical steps to prevent injuries including use of tools with sequential triggers, training prior to use, and use of appropriate protective equipment such as eye protection. In June 2013, NIOSH released an instructional comic providing information on nail gun hazards and ways to use the device properly. Research aimed at reducing nail gun accidents among frame carpenters, among the heaviest users of nail guns, is ongoing. Types Pneumatic The most popular type of nail gun, which uses compressed air to drive its fasteners. Thus, it is dependent on both a compressor and a power source to operate the compressor (either direct electrical current or an on-site generator), and encumbered in use by a cumbersome high-pressure air hose (which may be extremely stiff in cold weather, and retain an annoying "memory" of having been coiled at any time). A pneumatic nail gun is also limited by the size and rebound rate of its compressor in the number of fasteners it can drive consecutively. Historically pneumatic air guns required daily oiling (at a minimum), though "oil-free" versions are also produced. Powder-actuated Propellant-powered ("powder-actuated") nail guns fall into two broad categories: Direct drive or high velocity devices. This uses gas pressure acting directly on the nail to drive it. Indirect drive or low velocity devices. This uses gas pressure acting on a heavy piston which drives the nail. Indirect drive nailers are safer because they cannot launch a free-flying projectile even if tampered with or misused, and the lower velocity of the nails is less likely to cause explosive shattering of the work substrate. Either type can, with the right cartridge loads, be very powerful, driving a nail or other fastener into hard concrete, stone, hardwood, steel, etc., with ease. Combustion powered Powered by a gas (e.g. propane) and air explosion in a small cylinder; the piston pushes the nail directly and there are no rotating parts. Electric In one type of electric nail gun, a rotating electric motor gradually compresses a powerful spring and suddenly releases it. Solenoid-powered Here a solenoid propels a metal piston, which has a long front rod which propels the nail. The solenoid tends to attract the piston or projectile towards the middle of the solenoid. If a series of solenoids is used (which makes the nail gun into a type of coilgun), to get more power, each solenoid must be switched off when the piston has reached the middle of the solenoid. In multi-solenoid coilguns a short burst of power from a big capacitor (one attached to each solenoid) comes at the right time to propel the piston or projectile. For more information see Coilguns for ferromagnetic projectiles. Pin nailer A pin nailer is a type of nail gun that drives simple pin-like fasteners as substitutes for finish nails. 
Pin nailers are often used on molding for furniture, cabinets, and interior millwork. They can also work as temporary fasteners for pieces with irregular shapes that are impossible to hold down with a clamp securely. Gallery
Technology
Hydraulics and pneumatics
null
2031045
https://en.wikipedia.org/wiki/Hardware%20acceleration
Hardware acceleration
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both. To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput, and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit, at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and the need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function logic implemented on field-programmable gate arrays (FPGAs), and fixed-function logic implemented on application-specific integrated circuits (ASICs). Hardware acceleration is advantageous for performance, and practical when the functions are fixed, so updates are not needed as often as in software solutions. With the advent of reprogrammable logic devices such as FPGAs, the restriction of hardware acceleration to fully fixed algorithms has eased since 2010, allowing hardware acceleration to be applied to problem domains requiring modification to algorithms and processing control flow. A disadvantage, however, is that hardware acceleration often depends on proprietary libraries that not all vendors are keen to distribute or expose, making it difficult to integrate into many open-source projects. Overview Integrated circuits are designed to handle various operations on both analog and digital signals. In computing, digital signals are the most common and are typically represented as binary numbers. Computer hardware and software use this binary representation to perform computations. This is done by processing Boolean functions on the binary input, and then outputting the results for storage or further processing by other devices. Computational equivalence of hardware and software Because custom hardware can implement any Turing-computable function, it is always possible to design custom hardware that performs the same function as a given piece of software. Conversely, software can always be used to emulate the function of a given piece of hardware. Custom hardware may offer higher performance per watt for the same functions that can be specified in software.
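As a concrete, small-scale illustration of this equivalence (a sketch, not taken from the article), the C program below computes the same function, the number of set bits in a 64-bit word, in two ways: with a generic software loop, and with the __builtin_popcountll builtin of GCC and Clang, which the compiler can map to a single dedicated hardware instruction (POPCNT on x86-64 when built with an appropriate -mpopcnt or -march flag).

/* Sketch: the same function computed in software and, via a compiler
 * builtin, by a dedicated hardware instruction where one is available. */
#include <stdint.h>
#include <stdio.h>

/* Software implementation: a generic loop any CPU can execute. */
static unsigned popcount_software(uint64_t x) {
    unsigned count = 0;
    while (x) {
        count += (unsigned)(x & 1u); /* test the lowest bit */
        x >>= 1;                     /* move the next bit into place */
    }
    return count;
}

int main(void) {
    uint64_t word = 0xF0F0F0F0F0F0F0F0ULL;

    /* Hardware-assisted path: GCC/Clang can emit a single POPCNT
     * instruction for this builtin on CPUs that support it. */
    unsigned hw = (unsigned)__builtin_popcountll(word);
    unsigned sw = popcount_software(word);

    printf("software: %u, builtin/hardware: %u\n", sw, hw);
    return 0;
}

Both paths return 32 for this input; the point is only that the same Boolean function can be realized either as a sequence of general-purpose instructions or as fixed-function circuitry.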
Hardware description languages (HDLs) such as Verilog and VHDL can model the same semantics as software and synthesize the design into a netlist that can be programmed to an FPGA or composed into the logic gates of an ASIC. Stored-program computers The vast majority of software-based computing occurs on machines implementing the von Neumann architecture, collectively known as stored-program computers. Computer programs are stored as data and executed by processors. Such processors must fetch and decode instructions, as well as load data operands from memory (as part of the instruction cycle), to execute the instructions constituting the software program. Relying on a common cache for code and data leads to the "von Neumann bottleneck", a fundamental limitation on the throughput of software on processors implementing the von Neumann architecture. Even in the modified Harvard architecture, where instructions and data have separate caches in the memory hierarchy, there is overhead to decoding instruction opcodes and multiplexing available execution units on a microprocessor or microcontroller, leading to low circuit utilization. Modern processors that provide simultaneous multithreading exploit under-utilization of available processor functional units and instruction level parallelism between different hardware threads. Hardware execution units Hardware execution units do not in general rely on the von Neumann or modified Harvard architectures and do not need to perform the instruction fetch and decode steps of an instruction cycle and incur those stages' overhead. If needed calculations are specified in a register transfer level (RTL) hardware design, the time and circuit area costs that would be incurred by instruction fetch and decoding stages can be reclaimed and put to other uses. This reclamation saves time, power, and circuit area in computation. The reclaimed resources can be used for increased parallel computation, other functions, communication, or memory, as well as increased input/output capabilities. This comes at the cost of general-purpose utility. Emerging hardware architectures Greater RTL customization of hardware designs allows emerging architectures such as in-memory computing, transport triggered architectures (TTA) and networks-on-chip (NoC) to further benefit from increased locality of data to execution context, thereby reducing computing and communication latency between modules and functional units. Custom hardware is limited in parallel processing capability only by the area and logic blocks available on the integrated circuit die. Therefore, hardware is much more free to offer massive parallelism than software on general-purpose processors, offering a possibility of implementing the parallel random-access machine (PRAM) model. It is common to build multicore and manycore processing units out of microprocessor IP core schematics on a single FPGA or ASIC. Similarly, specialized functional units can be composed in parallel, as in digital signal processing, without being embedded in a processor IP core. Therefore, hardware acceleration is often employed for repetitive, fixed tasks involving little conditional branching, especially on large amounts of data. This is how Nvidia's CUDA line of GPUs are implemented. 
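As a small, hedged analogue of that data-parallel style of execution (CPU SIMD rather than a GPU, and not an example from the article), the C sketch below performs the same element-wise addition once with a scalar loop and once four floats at a time using the SSE intrinsics _mm_loadu_ps, _mm_add_ps and _mm_storeu_ps from <immintrin.h>; it assumes an x86 processor with SSE support.

/* Sketch: scalar loop versus 4-wide SIMD for an element-wise addition,
 * a miniature of the repetitive, fixed, data-parallel work that
 * hardware accelerators such as GPUs are built for. */
#include <immintrin.h>
#include <stdio.h>

#define N 8 /* kept a multiple of 4 so no remainder handling is needed */

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float scalar_out[N], simd_out[N];

    /* Scalar path: one addition per iteration. */
    for (int i = 0; i < N; i++)
        scalar_out[i] = a[i] + b[i];

    /* SIMD path: four additions per instruction. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);                 /* load 4 floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&simd_out[i], _mm_add_ps(va, vb)); /* add and store 4 at once */
    }

    for (int i = 0; i < N; i++)
        printf("%g %g\n", scalar_out[i], simd_out[i]);
    return 0;
}

A GPU kernel takes the same idea further, running thousands of such lanes concurrently, but the division of labour is the same: a fixed operation applied uniformly across a large data set.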
Implementation metrics As device mobility has increased, new metrics have been developed that measure the relative performance of specific acceleration protocols, considering characteristics such as physical hardware dimensions, power consumption, and operations throughput. These can be summarized into three categories: task efficiency, implementation efficiency, and flexibility. Appropriate metrics consider the area of the hardware along with both the corresponding operations throughput and energy consumed. Applications Examples of hardware acceleration include bit blit acceleration functionality in graphics processing units (GPUs), use of memristors for accelerating neural networks, and regular expression hardware acceleration for spam control in the server industry, intended to prevent regular expression denial of service (ReDoS) attacks. The hardware that performs the acceleration may be part of a general-purpose CPU, or a separate unit called a hardware accelerator, though they are usually referred to with a more specific term, such as 3D accelerator, or cryptographic accelerator. Traditionally, processors were sequential (instructions are executed one by one), and were designed to run general purpose algorithms controlled by instruction fetch (for example, moving temporary results to and from a register file). Hardware accelerators improve the execution of a specific algorithm by allowing greater concurrency, having specific datapaths for their temporary variables, and reducing the overhead of instruction control in the fetch-decode-execute cycle. Modern processors are multi-core and often feature parallel "single-instruction; multiple data" (SIMD) units. Even so, hardware acceleration still yields benefits. Hardware acceleration is suitable for any computation-intensive algorithm which is executed frequently in a task or program. Depending upon the granularity, hardware acceleration can vary from a small functional unit, to a large functional block (like motion estimation in MPEG-2). Hardware acceleration units by application
Technology
Computer hardware
null
2823401
https://en.wikipedia.org/wiki/Cavendish%20banana
Cavendish banana
Cavendish bananas are the fruits of one of a number of banana cultivars belonging to the Cavendish subgroup of the AAA banana cultivar group (triploid cultivars of Musa acuminata). The same term is also used to describe the plants on which the bananas grow. They include commercially important cultivars like 'Dwarf Cavendish' (1888) and 'Grand Nain' (the "Chiquita banana"). Since the 1950s, these cultivars have been the most internationally traded bananas. They replaced the Gros Michel banana after it was devastated by Panama disease. They are unable to reproduce sexually, instead being propagated via identical clones. Due to this, the genetic diversity of the Cavendish banana is very low. This, combined with the fact the Cavendish is planted in dense chunks in a monoculture without other natural species to serve as a buffer, makes the Cavendish extremely vulnerable to disease, fungal outbreaks, and genetic mutation, possibly leading to eventual commercial extinction. History of cultivation Cavendish bananas were named after William Cavendish, 6th Duke of Devonshire. Though they were not the first known banana specimens in Europe, in around 1834 Cavendish received a shipment of bananas (from Mauritius) courtesy of the chaplain of Alton Towers (then the seat of the Earls of Shrewsbury). His head gardener and friend, Sir Joseph Paxton, cultivated them in the greenhouses of Chatsworth House. The plants were botanically described by Paxton as Musa cavendishii, after the Duke. For his work, Paxton won a medal at the 1835 Royal Horticultural Society show. The Chatsworth bananas were shipped off to various places in the Pacific around the 1850s. It is believed that some of them may have ended up in the Canary Islands, though other authors believe that the bananas in the Canary Islands had been there since the fifteenth century and had been introduced by early Portuguese explorers who obtained them from West Africa and were later responsible for spreading them to the Caribbean. African bananas in turn were introduced from Southeast Asia into Madagascar by early Austronesian sailors. In 1888, bananas from the Canary Islands were imported into England by Thomas Fyffe. These bananas are now known to belong to the Dwarf Cavendish cultivar. Cavendish bananas entered mass commercial production in 1903 but did not gain prominence until later when Panama disease attacked the dominant Gros Michel ("Big Mike") variety in the 1950s. Because they were successfully grown in the same soils as previously affected Gros Michel plants, many assumed the Cavendish cultivars were more resistant to Panama disease. Contrary to this notion, in mid-2008, reports from Sumatra and Malaysia suggested that Panama disease had started attacking Cavendish cultivars. After years of attempting to keep it out of the Americas, in mid-2019, Panama disease Tropical Race 4 (TR4), was discovered on banana farms in the coastal Caribbean region. With no fungicide effective against TR4, the Cavendish may meet the same fate as the Gros Michel. Taxonomy and nomenclature Cavendish bananas are a subgroup of the triploid (AAA) group cultivars of Musa acuminata. Cavendish cultivars are distinguished by the height of the plant and features of the fruits, and different cultivars may be recognized as distinct by different authorities. The most important clones for fruit production include: 'Dwarf Cavendish', 'Grande Naine', 'Lacatan' (bungulan), 'Poyo', 'Valéry', and 'Williams' under one system of cultivar classification. 
Another classification includes: 'Double', 'Dwarf Cavendish', 'Extra Dwarf Cavendish', 'Grande Naine', 'Pisang Masak Hijau' (syn 'Lacatan'), and 'Giant Cavendish' as a group of several difficult to distinguish cultivars (including 'Poyo', 'Robusta', 'Valéry', & 'Williams'). 'Grande Naine' is the most important clone in international trade, while 'Dwarf Cavendish' is the most widely grown clone. 'Grande Naine' is also known as Chiquita banana. Uses Cavendish bananas accounted for 47% of global banana production between 1998 and 2000, and the vast majority of bananas entering international trade. The fruits of the Cavendish bananas are eaten raw, used in baking and fruit salads, and served to complement other foods. The outer skin is partially green when bananas are sold in food markets, and turns yellow when the fruit ripens. As it ripens, the starch is converted to sugars, turning the fruit sweet. When it reaches its final stage (stage 7), brown/black "sugar spots" develop. When overripe, the skin turns black and the flesh becomes mushy. Bananas ripen naturally or through an induced process. Once picked, they can turn yellow on their own provided that they are fully mature by the time they are harvested, or can be exposed to ethylene gas to induce ripening. Bananas which are turning yellow emit natural ethylene, which is characterized by the emission of sweet-scented esters. Most retailers sell bananas in stages 3–6, with stages 5–7 being ideal for immediate consumption. The PLUs used for Cavendish bananas are 4011 (yellow) and 4186 (small yellow). Organic Cavendish bananas are assigned PLU 94011. Diseases Cavendish bananas, accounting for around 99% of banana exports to developed countries, are vulnerable to the fungal disease known as Panama disease. There is a risk of extinction of the variety. Because Cavendish bananas are parthenocarpic (they do not have seeds and reproduce only through cloning), their resistance to disease is often low. Development of disease resistance depends on mutations occurring in the propagation units, and hence evolves more slowly than in seed-propagated crops. The development of resistant varieties has therefore been the only alternative for protecting the plants from tropical and subtropical diseases like bacterial wilt and Fusarium wilt, commonly known as Panama disease. A replacement for the Cavendish would likely depend on genetic engineering, which is banned in some countries. Conventional plant breeding has not yet been able to produce a variety that preserves the flavor and shelf-life of the Cavendish. In 2017, James Dale, a biotechnologist at Queensland University of Technology in Brisbane, Australia, produced such a transgenic banana, resistant to Tropical Race 4. In 2023, the Philippine Space Agency and Bureau of Plant Industry deployed pest-control measures against Fusarium oxysporum f.sp. cubense. In 2022, the Philippines was the world's second-largest banana exporter, with the Cavendish as its top variety.
Biology and health sciences
Tropical and tropical-like fruit
Plants
28871384
https://en.wikipedia.org/wiki/Free-ranging%20dog
Free-ranging dog
A free-ranging dog is a dog that is not confined to a yard or house. Free-ranging dogs include street dogs, village dogs, stray dogs, feral dogs, etc., and may be owned or unowned. The global dog population is estimated to be 900 million, of which around 20% are regarded as owned pets and therefore restrained. Free-ranging dogs are common in developing countries. It is estimated that there are about 62 million free-ranging dogs in India. In Western countries free-ranging dogs are rare; in Europe they are primarily found in parts of Eastern Europe, and, to a lesser extent, in parts of Southern Europe. Free-ranging dogs pose concerns about the spread of rabies, especially in regions of the world where the disease is endemic. Different policies exist around the world with regard to the management of free-ranging dogs, including trap–neuter–return, the permanent removal of dogs from the streets and their indefinite housing in animal shelters, their (national or international) adoption, or their euthanasia. Policies regarding stray dogs have been the object of ongoing controversy in recent decades. State governments, animal rights organizations, veterinarians and NGOs have been involved in managing free-ranging dogs around the world. Origin Dogs living with humans is a dynamic relationship, with a large proportion of the dog population losing contact with humans at some stage over time. This loss of contact first occurred after domestication and has reoccurred throughout history. The global dog population is estimated to be 900 million and rising. Although it is said that the "dog is man's best friend" for the 17–24% of dogs that live as pets in the developed countries, in the developing world pet dogs are uncommon but there are many village, community or feral dogs. Most of these dogs live out their lives as scavengers and have never been owned by humans, with one study showing their most common response when approached by strangers is to run away (52%) or respond aggressively (11%). Little is known about these dogs, or the dogs in developed countries that are feral, stray or that are in shelters, as the majority of modern research on dog cognition has focused on pet dogs living in human homes. Factors leading to stray dogs Stray dogs are dogs without an owner. While the term stray dog is sometimes used to refer specifically to dogs which have been lost, in a more general sense a stray dog is any unowned free-ranging dog. Four Paws defines stray animals as "those animals who are either born on the streets or have become homeless due to abandonment". Several factors lead to the existence of stray dogs. In some cases, the problem originates in the past, with the dogs having lived on the streets for many generations. Such dogs are born on the streets, having never been owned, and live in a feral or semi-feral state. Other stray dogs have been previously owned and ended on the streets because they were abandoned by their owners, either at birth (when the owners could not accommodate a litter) or at a later time, especially if the owners faced economic challenges, lifestyle changes, or health issues. Some owners abandon their working dogs if they are dissatisfied with their performance. Dogs can also end up as strays in cases of natural disasters, armed conflicts or other calamities. Categories of dogs There is confusion with the terms used to categorize dogs. 
Dogs can be classed by whether they possess an owner or a community of owners, how freely they can move around, and any genetic differences they have from other dog populations due to long-term separation. Owned dogs Owned dogs are "family" dogs. They have an identifiable owner, are commonly socialized, and are not allowed to roam. They are restricted to particular outdoor or indoor areas. They have little impact on wildlife unless going with humans into natural areas. Domestic dogs are all dog breeds (other than dingoes) selectively bred, kept and fed by humans. They can be pets, guard dogs, livestock guardian dogs or working dogs. Domestic dogs may also behave like wild dogs if they are not adequately controlled or are free roaming. Free-ranging owned dogs A free-ranging dog is a dog that is not confined to a yard or house. Free-ranging owned dogs are cared for by one owner or a community of owners, and are able to roam freely. This includes "village dogs", which live in rural areas and human habitations. These are not confined. However, they rarely leave the village vicinity. This also includes "rural free-ranging dogs", which also live in rural areas and human habitations. These are owned or are associated with homes, and they are not confined. These include farm and pastoral dogs that range over particular areas. Free-ranging unowned dogs Free-ranging unowned dogs are stray dogs. They get their food and shelter from human environments, but they have not been socialized and so they avoid humans as much as possible. Free-ranging unowned dogs include "street dogs", which live in cities and urban areas. These have no owner but are commensals, subsisting on left over food from human, garbage or other dogs' food as their primary food sources. Free-ranging unowned dogs also include feral dogs. Feral dogs The term "feral" can be used to describe those animals that have been through the process of domestication but have returned to a wild state. "Domesticated" and "socialized" (tamed) do not mean the same thing, as it is possible for an individual animal of a domesticated species to be feral and not tame, and it is possible for an individual animal of a wild species to be socialized to live with humans. Feral dogs differ from other dogs because they did not have close human contact early in their lives (socialization). Feral dogs live in a wild state with no food and shelter intentionally provided by humans and show a continuous and strong avoidance of direct human contact. The distinction between feral, stray, and free-ranging dogs is sometimes a matter of degree, and a dog may shift its status throughout its life. In some unlikely but observed cases, a feral dog that was not born wild but lived with a feral group can become rehabilitated to a domestic dog with an owner. A dog can become a stray when it escapes human control, by abandonment or being born to a stray mother. A stray dog can become feral when it is forced out of the human environment or when it is co-opted or socially accepted by a nearby feral group. Feralization occurs by the development of a fear response to humans. Feral dogs are not reproductively self-sustaining, suffer from high rates of juvenile mortality, and depend indirectly on humans for their food, their space, and the supply of co-optable individuals. "Wild" dogs The existence of "wild dogs" is debated. Some authors propose that this term applies to the Australian dingo and dingo-feral dog hybrids. 
They believe that these have a history of independence from humans and should no longer be considered as domesticated. Others disagree, and propose that the dingo was once domesticated and is now a feral dog. Queensland Department of Agriculture and Fisheries defines wild dogs as any dogs that are not domesticated, which includes dingoes, feral dogs and hybrids. Yearling wild dogs frequently disperse more than from the place where they were born. The first British colonists to arrive in Australia established a settlement at Port Jackson in 1788 and recorded dingoes living there with indigenous Australians. Although the dingo exists in the wild, it associates with humans but has not been selectively bred as have other domesticated animals. The dingo's relationship with indigenous Australians can be described as commensalism, in which two organisms live in close association but without depending on each other for survival. They will both hunt and sleep together. The dingo is therefore comfortable enough around humans to associate with them, but is still capable of living independently, much like the domestic cat. Any free-ranging unowned dog can be socialized to become an owned dog, as some dingoes do when they join human families. Another point of view regards domestication as a process that is difficult to define. It regards dogs as being either socialized and able to exist with humans, or unsocialized. There exist dogs that live with their human families but are unsocialized and will treat strangers aggressively and defensively as might a wild wolf. There also exists a number of cases where wild wolves have approached people in remote places, attempting to get them to play and to form companionship. Street dog Street dogs, known in scientific literature as free-ranging urban dogs, are unconfined dogs that live in cities. They live virtually everywhere cities exist and the local human population allows, especially in the developing world. Street dogs may be former pets that have strayed from or are abandoned by their owners, or may be feral animals that have never been owned. Street dogs may be stray purebreds, true mixed-breed dogs, or unbred landraces such as the Indian pariah dog. Street dog overpopulation can cause problems for the societies in which they live, so campaigns to spay and neuter them are sometimes implemented. They tend to differ from rural free-ranging dogs in their skill sets, socialization, and ecological effects. In Paraguay, in 2017, Diana Vicezar established a community-based organisation designed to tackle the issue of abandoned, unsheltered dogs, as well as plastic pollution. The scheme encouraged volunteers to build shelter for these dogs using recycled materials. By 2019 it had three international chapters, and had worked with 1000 people. Problems caused by street dogs Bites Street dogs generally avoid conflict with humans to survive. However, dog bites and attacks can occur for various reasons. Dogs might bite because they are scared, startled, feel threatened, or are protecting something valuable like their puppies, food, or toys. Bites can also happen if dogs are unwell due to illness or injury, are playing, or are experiencing hunger, thirst, abuse, or a lack of caretakers. Territorial instincts and predator instincts can also lead to bites. Rabies remains a significant issue in some countries. 
In India, where it is estimated that there are about 62 million free-ranging dogs, about 17.4 million animal bites occur annually, resulting in 20,565 human rabies deaths. Rabies is endemic in India, with the country accounting for 36% of the world’s rabies deaths. In addition to rabies, dog bites are also associated with other health risks, including Capnocytophaga canimorsus, MRSA, tetanus, Pasteurella, Bergeyella zoohelcum, osteomyelitis, septic arthritis, tenosynovitis, Giardia and leptospirosis; therefore dog bites require immediate medical attention. After a dog bite, a tetanus vaccine is needed if the person has not been previously adequately vaccinated. Prophylactic antibiotics are also needed for high-risk wounds or people with immune deficiency. Deaths from dog bites are more common in low- and middle-income countries than in high-income countries. Quality of life The presence of stray dogs can significantly impact the quality of life for humans in several ways. Barking, howling, and dog fights can disturb people, especially at night. The smell of dog urine, a result of territory marking, can become pungent among dogs that have not been spayed or neutered, and the presence of feces can lead to sanitation issues and health risks such as toxocariasis. Additionally, the fear of dog bites and attacks can cause anxiety and affect people's mobility and outdoor activities. Conversely, stray dogs' quality of life is also greatly affected by their interactions with humans. Stray dogs often struggle with food and water scarcity, and they are vulnerable to abuse and neglect. Lack of medical care leads to untreated injuries and diseases. Urban environments can be harsh and stressful, and encounters with humans can result in fear, injuries, and displacement. Skills and adaptations Dogs are known to be a highly adaptive and intelligent species. To survive in modern cities, street dogs must be able to navigate traffic. Some of the stray dogs in Bucharest are seen crossing the large streets at pedestrian crosswalks. The dogs appear to have noticed that cars tend to stop when humans cross at such markings. Having accustomed themselves to the flow of pedestrian and automobile traffic, they sit patiently with people at the curb during a red light, and then cross with the crowd. In other countries, street dogs have reportedly been observed using subway and bus services. Behaviour Free-ranging dogs tend to be crepuscular animals, and are often inactive during daytime, especially during the heat of the summer. Free-ranging dogs commonly form packs. The dogs rest close to their resource sites in their territory, choosing a place that enables maximum visibility of the surroundings. For sleeping, they often choose locations in the core of the territory, preferring areas with shade. The dogs seek spaces which shelter them from harsh weather, and often rest or sleep under parked cars. Free-ranging dogs who have been in this state for generations have developed certain traits through natural selection in order to be able to survive in their respective environments. Wild dogs rest during the day, often not far from water, and their travel routes to and from resting or den sites may be well defined. They are usually timid and do not often stray into urban areas unless they are encouraged.
Those with a recent domestic background or regular close contact with people may approach dwellings or people. Wild dogs are attracted to places where they can scavenge food, and deliberately or inadvertently feeding them can make them dependent on humans. Wild dingoes in remote areas live in packs, often of 3–12 animals, with a dominant (alpha) male and female controlling breeding. Packs establish territories which usually do not overlap. Wild dogs, particularly dingoes, visit the edge of their territory regularly. This checking of the boundaries is termed the dog's beat. Wild dogs are often heard howling during the breeding season which, for pure dingoes, occurs once a year. Hybrid dogs have two oestrus cycles each year, although they may not always successfully raise young in each cycle. After a nine-week gestation, four to six pups are born in a den that provides protection from the elements and other animals. Dens may be in soft ground under rocks, logs or other debris, or in logs or other hollows. Pups are suckled for 4–6 weeks and weaned at four months. They become independent of their parents when they are 6 weeks to 2 months old, with those becoming independent at the later time having a higher rate of survival. Increased food supplied by people also enables more pups to survive to maturity. Feeding habits According to Queensland Department of Agriculture and Fisheries, wild dogs can be found on grazing land, on the fringes of towns, in rural-residential estates, or in forests and woodlands—anywhere there is food, water and shelter. They will eat whatever is easiest to obtain when they are hungry, animal or vegetable matter. They will hunt for live prey, or will eat road-killed animals, dead livestock, and scraps from compost heaps or rubbish. They mostly take small prey such as rabbits, possums, rats, wallabies and bandicoots. When hunting in packs, they will take larger animals such as kangaroos, goats or the young of cattle and horses. Their choice of primary prey species depends on what is abundant and easy to catch. They usually hunt in the early morning and early evening, when they locate individual prey animals by sight, approach them silently, and then pursue them. Wild dogs that depend primarily on rubbish may remain in the immediate vicinity of the source, while those that depend on livestock or wild prey may travel up to . In a Perth study most of the 1400 dogs involved in livestock attacks were friendly and approachable family pets—very few were aggressive to people. Rabies impact In 2011, a media article on the stray dog population by the US National Animal Interest Alliance said that there were 200 million stray dogs worldwide and that a "rabies epidemic" was causing a global public health issue. In 2024, the World Health Organization reported that dog bites and scratches caused 99% of the human rabies cases, and that 40% of victims were children under 15. It also estimated that there were about 59,000 human deaths from rabies annually, most of them occurring in Asia and Africa. Rabies cases in recent years have occurred in Europe also. In 2012, in Romania, a 5-year-old girl died after she was bitten by a rabid stray dog. In the United States, although rabies is present primarily in the wildlife, in 2022, 50 dogs tested positive for rabies. In Africa, about 21,000–25,000 people die annually due to rabies. 
There have been debates about whether pre-exposure prophylaxis (PrEP) for rabies (preventative rabies vaccines) should be administered as part of routine vaccination schemes to children who live in areas where rabies is endemic and where there are many free-ranging dogs. While PrEP does not eliminate the need for post-exposure prophylaxis (PEP), the life-saving treatment needed after being bitten by a potentially rabid animal, it does simplify the post-exposure treatment required. Some tourists from Western countries who travel abroad may not be aware of the rabies risk that exists in the countries they visit. In 2019, a woman from Norway died of rabies after she contracted the virus while on holiday in the Philippines, where she was bitten by a stray puppy that she and her friends had rescued. Conservation impact Large numbers of free-ranging dogs can pose a threat to wildlife. Dogs have contributed to 11 vertebrate extinctions, and are a known or potential threat to 188 threatened species worldwide: 96 mammal (33 families), 78 bird (25 families), 22 reptile (10 families) and three amphibian (three families) species. In an urban environment, free-ranging dogs are often apex predators. Increasing numbers of free-ranging dogs have become a threat to the snow leopard and young brown bears on the Tibetan Plateau because dog packs chase these animals away from food. Free-ranging dogs are often vectors of zoonotic diseases such as rabies, toxocariasis, heartworm, leptospirosis, Capnocytophaga, bordetellosis, and echinococcosis that can be spread to humans, and can also spread canine distemper, canine adenovirus, parvovirus and parainfluenza, which can infect other dogs and also jump into species such as African wild dogs, wolves, lions and tigers. In addition, they can interbreed with other members of the genus Canis such as the gray wolf, the Ethiopian wolf and the dingo, alongside those outside the genus such as the pampas fox, raising genetic purity concerns. In a study conducted in 2018–2020, a wolf-dog hybrid was discovered in the Southern Carpathian forests of Romania. The study found that although this discovery may presently seem insignificant, it could pose a threat to the genetic integrity of the wolf population in the long term, and it advised further study of the problem of stray dogs entering wolf habitat. Free-ranging urban dogs by country South Asia Afghanistan A group of stray dogs became famous in Afghanistan after confronting a suicide bomber, preventing fifty American soldiers from being killed. However, one of the surviving dogs, Target, was mistakenly euthanized when she was brought to the United States. Bhutan In October 2023, Bhutan achieved 100% sterilization of its free-roaming dogs. A nationwide sterilization initiative was carried out by the government under the Nationwide Accelerated Dog Population Management and Rabies Control Program (NADPM&RCP). The program to manage stray dogs started in 2009 and multiple phases were carried out to achieve 100% sterilization. Stray dogs are feared in Bhutan when they move around in packs. Dog bites are of concern in almost all cities. In May 2022, six feral stray dogs mauled and killed a seven-year-old girl in Genekha. Stray dogs have also historically posed a problem for tourists in Bhutan, who have complained about the disturbance caused by nightly howls. An ear notch indicates a dog has been sterilized and vaccinated.
India Due to the collapse of vulture populations in India, which formerly consumed large quantities of animal carcasses and removed certain pathogens from the food chain, India's urban street dog populations have exploded and become a health hazard. Mumbai, for example, has over 12 million human residents, over half of whom are slum-dwellers. At least five hundred tons of garbage remain uncollected daily. Therefore, conditions are perfect for supporting a particularly large population of stray dogs. In 2001, a law was passed in India making the killing of stray dogs illegal. Contrary to misconceptions that this law exacerbated problems, evidence shows that humane methods, such as vaccinating and sterilizing dogs, are more effective in controlling the street dog population and reducing rabies cases. For instance, World Animal Protection highlighted how Mexico eliminated human rabies through mass dog vaccination. Similarly, initiatives like World Veterinary Service's (WVS) Mission Rabies have successfully vaccinated and sterilized 70% of dogs in Goa, making it the first state in India to become rabies-free. These approaches align with global health recommendations, emphasizing vaccination and sterilization over culling to effectively manage rabies and street dog populations. Pakistan In Pakistan, several dog breeds exist, including the Gaddi Kutta, the Indian pariah dog and the Bully Kutta, among others. In the city of Lahore, the Public Health Department launched a campaign to kill 5,000 stray dogs. In 2009, 27,576 dogs were killed within the city of Lahore; in 2005, this number was 34,942. In 2012, after 900 dogs were killed in the city of Multan, the Animal Safety Organisation in Pakistan sent a letter to Chief Minister (CM) Shahbaz Sharif recommending that "stray dogs be vaccinated rather than killed." Sri Lanka In Sri Lanka, there is a no-kill policy for street dogs, so neutering and vaccination are encouraged. Despite the proposal for an updated Animal Welfare Act, only a century-old law against animal cruelty exists, so street dogs are still subjected to cruelty in various forms. Europe Bulgaria There are a number of street dogs in Sofia, the capital of Bulgaria. The number of street dogs in Bulgaria has been reduced in recent years. While in 2007 there were 11,124 street dogs in Sofia, the number dropped to 3,589 in 2018. Greece There are stray dogs in Greece. In 2017, a British woman who was visiting as a tourist was mauled to death by a pack of stray dogs. Italy Around 80% of abandoned dogs die early due to lack of survival skills. Stray dogs are primarily found in Southern Italy. Moldova In 2023, it was estimated that there were about 5,000 stray dogs on the streets of Chișinău, Moldova's capital. During the first half of 2024, 791 people in Chișinău were bitten by stray dogs. Portugal The 2023 National Census of Stray Animals found that there were 101,015 stray dogs in Portugal. Romania In Romania, free-ranging urban dogs (called in Romanian câini maidanezi, literally "wasteland dogs", câini comunitari "community dogs", etc.) have been a huge problem in recent decades, especially in larger cities, with many people being bitten by dogs. The problem originates primarily in the systematization programme that took place in Communist Romania in the 1970s and 1980s under Nicolae Ceaușescu, who enacted a mass programme of demolition and reconstruction of existing villages, towns, and cities, in whole or in part, in order to build standardized blocks of flats (blocuri).
The dogs from the yards of the demolished houses were abandoned on the streets, and reproduced, multiplying their numbers throughout the years. Estimations for Bucharest vary widely, but the number of stray dogs was reduced drastically in 2014, after a 4-year-old child was attacked and killed by a dog in 2013. The Bucharest City Hall stated that over 51,200 stray dogs were captured from October 2013 to January 2015, with more than half being euthanized, about 23,000 being adopted, and 2,000 still residing in the municipality's shelters. Although the number of stray dogs in Romania has been reduced significantly during the past 15 years, there have been recent fatal incidents, including in 2022, when a man was mauled to death by a pack of 15–20 stray dogs in Bacău County, and in 2023, when a woman who was jogging in a field near Lacul Morii in Ilfov County was attacked and killed by stray dogs. Many stray dogs in Romania are adopted abroad, with the most common receiving countries being Germany, the United Kingdom, the Netherlands and Belgium. Russia Stray dogs are very common in Russia. They are found both in the countryside and in urban areas. In Russia, street dogs are accepted by the common people and are even fed by the local population, including in the capital city of Moscow. However, the capture of stray dogs by dog hunters' vans and their subsequent culling has been documented since around 1900. The number of street dogs in Moscow is estimated to be up to 50,000 animals. Their sad lot was dramatized by Anton Chekhov in the famous short story Kashtanka, by Mikhail Bulgakov in the novella Heart of a Dog, and by Gavriil Troyepolsky in the novel White Bim Black Ear. When the number of street dogs increased massively in the 1990s and early 2000s, there were many attacks on humans, and the dogs were captured and killed. In recent years the attitude and strategy towards street dogs have changed: the dogs are caught and sterilized, and it is ensured that they have enough to eat. The dogs keep the city free of food leftovers and rats. Since 2002, a monument dedicated to the stray dog Malchik ("little boy") has stood in Moscow. Stray dogs in Moscow have adapted their behavior to traffic and the life of the city. The dogs even ride the metro and understand the rules of traffic lights, and are often called Moscow's metro dogs. Serbia It is estimated, as of 2024, that there are 400,000 free-ranging dogs in Serbia. These dogs are found both in urban and rural areas. In 2011, the largest groups of urban free-ranging dogs were found in Belgrade (more than 17,000), Novi Sad (about 10,000), Niš (between 7,000 and 10,000), Subotica (about 8,000) and Kragujevac (about 5,000). Turkey While many developing countries harbor high numbers of stray dogs as a result of neglect, Turkey’s problem is a little different. In 2004, the Turkish government passed a law requiring local officials to rehabilitate rather than annihilate stray dogs. Animal Protection Law No. 5199 establishes a no-kill, no-capture policy, and unlawful euthanization is a prosecutable offense. It requires animals to be sterilized, vaccinated, and taken back to the place where they were found. Another reason for the increase in stray dog numbers is that it is easier to adopt a dog in Turkey than in many other nations. Even "dangerous breeds" could be homed before the "dangerous dogs" bill was passed at the beginning of 2022. Still, this means the vetting process for dog ownership is not extensive.
There is no real punishment for abandoning dogs on the streets. Istanbul, the most populous city of the country, is home to one of the highest concentrations of stray animals, with an estimated 400,000 to 600,000 dogs roaming the streets. In total, it is estimated that 3 to 10 million stray dogs live in Turkey, a number expected to rise to as many as 60 million within 10 years. North America United States Each year, approximately 2.7 million dogs and cats are euthanized because shelters are too full and there are not enough adoptive homes. In 2016, between 592,255 and 866,366 street dogs were euthanized in the US. In Detroit, it was estimated that there were about 50,000 stray dogs in 2013. Puerto Rico In Puerto Rico, street dogs (and cats) are known as . In the late 1990s it was estimated there were 50,000 street dogs in the U.S. territory. By 2018 there were around 300,000 stray dogs in Puerto Rico. Programs to address the problem have been launched by the Humane Society of Puerto Rico and others. In 2018, a non-profit organization called Sato Project launched its first "spayathon", a large-scale project to spay and neuter stray animals in Puerto Rico. Other initiatives include having mainland U.S. residents adopt the island dogs. Latin America Free-ranging dogs are common in Latin America. There are about 16 million free-ranging dogs in Mexico, 6 million in Peru and 4 million in Colombia. South-East Asia Philippines Locally known as Askals, street dogs in the Philippines, while sometimes showing mixing with breed dogs from elsewhere, are generally native unbred mongrel dogs. Thailand Management Given the problems associated with free-ranging dogs, including the spread of diseases (especially rabies, with dog bites and scratches being responsible for 99% of global human rabies cases), attacks on humans or other animals, and increased risk of road accidents, many places where there are free-ranging dogs have developed strategies to manage such dogs. Common approaches include the "trap–neuter–return" approach (similar to the concept applied to feral cats), where the dogs are sterilized and, if possible, vaccinated, and then returned to the streets; or, conversely, an approach where the free-ranging dogs are permanently removed from the streets, by housing them indefinitely in animal shelters, giving them up for adoption (including international adoption) or euthanizing them. The latter is controversial, but practiced in many countries; in the United States, every year, about 390,000 dogs in shelters are euthanized. The prevention of rabies is a major goal of policies dealing with stray dogs. Mass rabies vaccination of stray dogs can be successful, provided at least 70% of stray dogs in a community are vaccinated, in order to achieve herd immunity. However, rabies vaccination of stray dogs is complex, and there are challenges to successfully managing and delivering such vaccination. A policy of "catch–neuter–vaccinate–return", where the stray dogs are captured, sterilized, vaccinated and then released back on the street, is advocated by animal rights organizations such as Four Paws. However, where this cannot be achieved, a simplified and cheaper version of only sterilizing the dogs is adopted, which helps reduce their numbers over time, but slows down rabies eradication efforts. Policies such as oral vaccination of stray dogs (similar to the policies applied to rabies control among wildlife) have also been proposed. There have been campaigns to educate tourists about their interaction with free-ranging dogs.
This includes the necessity of getting prophylactic vaccines when traveling to some areas, seeking immediate medical care after being bitten by a dog, in order to prevent diseases such as rabies, tetanus and other infections, and exercising caution around stray dogs, especially when they are in packs. Lack of awareness of the health hazards associated with free-ranging dogs can result in injury and even death among tourists. Since 1990, over 80 American tourists have died from rabies after being exposed while traveling abroad. The WHO and international veterinary organizations have expressed their concerns about a possible rabies outbreak in Europe due to the war in Ukraine. According to Four Paws, before the war there were about 200,000 stray dogs in Ukraine, but by 2024, the number is estimated to have reached about a million. In 2022, in the UK, the Department for Environment, Food and Rural Affairs enacted a temporary ban on importing dogs from Ukraine, Belarus, Romania and Poland. The ban was lifted on 29 October 2022, with new, tighter animal health regulations entering into force. The UK is a rabies-free jurisdiction, although a rabies-like lyssavirus, called European bat lyssavirus 2, exists in bats, and in 2002 a bat handler died due to this virus. Although there is no rabies in indigenous dogs in the UK, there have been cases of people dying of rabies in the UK in the 21st century after contracting the disease abroad, with the most recent case occurring in 2012, when a woman died in London after having been bitten by a rabid dog in South Asia.
Biology and health sciences
Dogs
Animals
3790398
https://en.wikipedia.org/wiki/Megamaser
Megamaser
A megamaser is a type of astrophysical maser, which is a naturally occurring source of stimulated spectral line emission. Megamasers are distinguished from other astrophysical masers by their large isotropic luminosity. Megamasers have typical luminosities of 10³ solar luminosities, which is 100 million times brighter than masers in the Milky Way, hence the prefix mega. Likewise, the term kilomaser is used to describe masers outside the Milky Way that are thousands of times stronger than the average maser in the Milky Way, gigamaser is used to describe masers billions of times stronger than the average maser in the Milky Way, and extragalactic maser encompasses all masers found outside the Milky Way. Most known extragalactic masers are megamasers, and the majority of megamasers are hydroxyl (OH) megamasers, meaning the spectral line being amplified is one due to a transition in the hydroxyl molecule. There are known megamasers for three other molecules: water (H2O), formaldehyde (H2CO), and methine (CH). Water megamasers were the first type of megamaser discovered. The first water megamaser was found in 1979 in NGC 4945, a galaxy in the nearby Centaurus A/M83 Group. The first hydroxyl megamaser was found in 1982 in Arp 220, which is the nearest ultraluminous infrared galaxy to the Milky Way. All subsequent OH megamasers that have been discovered are also in luminous infrared galaxies, and there are a small number of OH kilomasers hosted in galaxies with lower infrared luminosities. Most luminous infrared galaxies have recently merged or interacted with another galaxy, and are undergoing a burst of star formation. Many of the characteristics of the emission in hydroxyl megamasers are distinct from those of hydroxyl masers within the Milky Way, including the amplification of background radiation and the ratio of hydroxyl lines at different frequencies. The population inversion in hydroxyl molecules is produced by far-infrared radiation that results from absorption and re-emission of light from forming stars by surrounding interstellar dust. Zeeman splitting of hydroxyl megamaser lines may be used to measure magnetic fields in the masing regions, and this application represents the first detection of Zeeman splitting in a galaxy other than the Milky Way. Water megamasers and kilomasers are found primarily associated with active galactic nuclei, while galactic and weaker extragalactic water masers are found in star-forming regions. Despite the different environments, the circumstances that produce extragalactic water masers do not seem to be very different from those that produce galactic water masers. Observations of water megamasers have been used to make accurate measurements of distances to galaxies in order to provide constraints on the Hubble constant. Background Masers The word maser derives from the acronym MASER, which stands for "Microwave Amplification by Stimulated Emission of Radiation". The maser is a predecessor of the laser, which operates at optical wavelengths and is named by replacing "microwave" with "light". Given a system of atoms or molecules, each with different energy states, an atom or molecule may absorb a photon and move to a higher energy level, or the photon may stimulate emission of another photon of the same energy and cause a transition to a lower energy level. Producing a maser requires population inversion, which is when a system has more members in a higher energy level relative to a lower energy level.
In such a situation, more photons will be produced by stimulated emission than will be absorbed. Such a system is not in thermal equilibrium, and as such requires special conditions to occur. Specifically, it must have some energy source that can pump the atoms or molecules to the excited state. Once population inversion occurs, a photon with a photon energy corresponding to the energy difference between two states can then produce stimulated emission of another photon of the same energy. The atom or molecule will drop to the lower energy level, and there will be two photons of the same energy, where before there was only one. The repetition of this process is what leads to amplification, and since all of the photons are the same energy, the light produced is monochromatic. Astrophysical masers Masers and lasers built on Earth and masers that occur in space both require population inversion in order to operate, but the conditions under which population inversion occurs are very different in the two cases. Masers in laboratories have systems with high densities, which limits the transitions that may be used for masing, and requires using a resonant cavity in order to bounce light back and forth many times. Astrophysical masers are at low densities, and naturally have very long path lengths. At low densities, being out of thermal equilibrium is more easily achieved because thermal equilibrium is maintained by collisions, meaning population inversion can occur. Long path lengths provide photons traveling through the medium many opportunities to stimulate emission, and produce amplification of a background source of radiation. These factors accumulate to "make interstellar space a natural environment for maser operation." Astrophysical masers may be pumped either radiatively or collisionally. In radiative pumping, infrared photons with higher energies than the maser transition photons preferentially excite atoms and molecules to the upper state in the maser in order to produce population inversion. In collisional pumping, this population inversion is instead produced by collisions that excite molecules to energy levels above that of the upper maser level, and then the molecule decays to the upper maser level by emitting photons. History In 1965, twelve years after the first maser was built in a laboratory, a hydroxyl (OH) maser was discovered in the plane of the Milky Way. Masers of other molecules were discovered in the Milky Way in the following years, including water (H2O), silicon monoxide (SiO), and methanol (CH3OH). The typical isotropic luminosity for these galactic masers is . The first evidence for extragalactic masing was detection of the hydroxyl molecule in NGC 253 in 1973, and was roughly ten times more luminous than galactic masers. In 1982, the first megamaser was discovered in the ultraluminous infrared galaxy Arp 220. The luminosity of the source, assuming it emits isotropically, is roughly . This luminosity is roughly one hundred million times stronger than the typical maser found in the Milky Way, and so the maser source in Arp 220 was called a megamaser. At this time, extragalactic water (H2O) masers were already known. In 1984, water maser emission was discovered in NGC 4258 and NGC 1068 that was of comparable strength to the hydroxyl maser in Arp 220, and are as such considered water megamasers. Over the next decade, megamasers were also discovered for formaldehyde (H2CO) and methine (CH). 
Galactic formaldehyde masers are relatively rare, and more formaldehyde megamasers are known than galactic formaldehyde masers. Methine masers, on the other hand, are quite common in the Milky Way. Both types of megamaser were found in galaxies in which hydroxyl had been detected. Methine is seen in galaxies with hydroxyl absorption, while formaldehyde is found in galaxies with hydroxyl absorption as well as those with hydroxyl megamaser emission. As of 2007, 109 hydroxyl megamaser sources were known, up to a redshift of . Over 100 extragalactic water masers are known, and of these, 65 are bright enough to be considered megamasers. General requirements Regardless of the masing molecule, there are a few requirements that must be met for a strong maser source to exist. One requirement is a radio continuum background source to provide the radiation amplified by the maser, as all maser transitions take place at radio wavelengths. The masing molecule must have a pumping mechanism to create the population inversion, and sufficient density and path length for significant amplification to take place. These combine to constrain when and where megamaser emission for a given molecule will take place. The specific conditions for each molecule known to produce megamasers are different, as exemplified by the fact that there is no known galaxy that hosts both of the two most common megamaser species, hydroxyl and water. As such, the different molecules with known megamasers will be addressed individually. Hydroxyl megamasers Arp 220 hosts the first megamaser discovered, is the nearest ultraluminous infrared galaxy, and has been studied in great detail at many wavelengths. For this reason, it is the prototype of hydroxyl megamaser host galaxies, and is often used as a guide for interpreting other hydroxyl megamasers and their hosts. Hosts and environment Hydroxyl megamasers are found in the nuclear region of a class of galaxies called luminous infrared galaxies (LIRGs), with far-infrared luminosities in excess of one hundred billion solar luminosities (LFIR > 10¹¹ L☉); ultra-luminous infrared galaxies (ULIRGs), with LFIR > 10¹² L☉, are favored. These infrared luminosities are very large, but in many cases LIRGs are not particularly luminous in visible light. For instance, the ratio of infrared luminosity to luminosity in blue light is roughly 80 for Arp 220, the first source in which a megamaser was observed. The majority of LIRGs show evidence of interaction with other galaxies or of having recently experienced a galaxy merger, and the same holds true for the LIRGs that host hydroxyl megamasers. Megamaser hosts are rich in molecular gas compared to spiral galaxies, with molecular hydrogen masses in excess of one billion solar masses (>10⁹ M☉). Mergers help funnel molecular gas to the nuclear region of the LIRG, producing high molecular densities and stimulating the high star formation rates characteristic of LIRGs. The starlight in turn heats dust, which re-radiates in the far infrared and produces the high LFIR observed in hydroxyl megamaser hosts. The dust temperatures derived from far-infrared fluxes are warm relative to spirals, ranging from 40–90 K. The far-infrared luminosity and dust temperature of a LIRG both affect the likelihood of hosting a hydroxyl megamaser, but the two quantities are themselves correlated, so it is unclear from observations alone what role each plays in producing hydroxyl megamasers. 
LIRGs with warmer dust are more likely to host hydroxyl megamasers, as are ULIRGs (LFIR > 10¹² L☉). At least one out of three ULIRGs hosts a hydroxyl megamaser, as compared with roughly one out of six LIRGs. Early observations of hydroxyl megamasers indicated a correlation between the isotropic hydroxyl luminosity and far-infrared luminosity, with LOH ∝ LFIR². As more hydroxyl megamasers were discovered, and care was taken to account for the Malmquist bias, this observed relationship was found to be flatter, with LOH ∝ LFIR^(1.2±0.1). Early spectral classification of their nuclei indicated that the LIRGs that host hydroxyl megamasers cannot be distinguished from the overall population of LIRGs. Roughly one third of megamaser hosts are classified as starburst galaxies, one quarter are classified as Seyfert 2 galaxies, and the remainder are classified as low-ionization nuclear emission-line regions, or LINERs. The optical properties of hydroxyl megamaser hosts and non-hosts are not significantly different. Recent infrared observations using the Spitzer Space Telescope are, however, able to distinguish hydroxyl megamaser host galaxies from non-masing LIRGs, as 10–25% of hydroxyl megamaser hosts show evidence for an active galactic nucleus, compared to 50–95% for non-masing LIRGs. The LIRGs that host hydroxyl megamasers may be distinguished from the general population of LIRGs by their molecular gas content. The majority of molecular gas is molecular hydrogen, and typical hydroxyl megamaser hosts have molecular gas densities greater than 1000 cm⁻³. These densities are among the highest mean densities of molecular gas among LIRGs. The LIRGs that host hydroxyl megamasers also have high fractions of dense gas relative to typical LIRGs. The dense gas fraction is measured by the ratio of the luminosity of hydrogen cyanide (HCN) to that of carbon monoxide (CO). Line characteristics The emission of hydroxyl megamasers occurs predominantly in the so-called "main lines" at 1665 and 1667 MHz. The hydroxyl molecule also has two "satellite lines" that emit at 1612 and 1720 MHz, but few hydroxyl megamasers have had satellite lines detected. Emission in all known hydroxyl megamasers is stronger in the 1667 MHz line; typical ratios of the flux in the 1667 MHz line to the 1665 MHz line, called the hyperfine ratio, range from a minimum of 2 to greater than 20. For hydroxyl emitting in thermodynamic equilibrium, this ratio will range from 1.8 to 1, depending upon the optical depth, so line ratios greater than 2 are indicative of a population out of thermal equilibrium. This may be compared with galactic hydroxyl masers in star-forming regions, where the 1665 MHz line is typically strongest, and hydroxyl masers around evolved stars, in which the 1612 MHz line is often strongest and, of the main lines, the 1667 MHz emission is frequently the stronger. The total width of emission at a given frequency is typically many hundreds of kilometers per second, and individual features that make up the total emission profile have widths ranging from tens to hundreds of kilometers per second. These may also be compared with galactic hydroxyl masers, which typically have linewidths of order a kilometer per second or narrower, and are spread over a velocity range of a few to tens of kilometers per second. The radiation amplified by hydroxyl masers is the radio continuum of the host galaxy. 
This continuum is primarily composed of synchrotron radiation produced by Type II supernovae. Amplification of this background is low, with amplification factors, or gains, ranging from a few percent to a few hundred percent, and sources with larger hyperfine ratios typically exhibiting larger gains. Sources with higher gains typically have narrower emission lines. This is expected if the pre-gain linewidths are all roughly the same, as line centers are amplified more than the wings, leading to line narrowing. A few hydroxyl megamasers, including Arp 220, have been observed with very long baseline interferometry (VLBI), which allows sources to be studied at higher angular resolution. VLBI observations indicate that hydroxyl megamaser emission is composed of two components, one diffuse and one compact. The diffuse component displays gains of less than a factor of one and linewidths of order hundreds of kilometers per second. These characteristics are similar to those seen with single-dish observations of hydroxyl megamasers, which are unable to resolve individual masing components. The compact components have high gains, ranging from tens to hundreds, high ratios of flux at 1667 MHz to flux at 1665 MHz, and linewidths of order a few kilometers per second. These general features have been explained by a narrow circumnuclear ring of material from which the diffuse emission arises, and individual masing clouds with sizes of order one parsec that give rise to the compact emission. The hydroxyl masers observed in the Milky Way more closely resemble the compact hydroxyl megamaser components. There are, however, some regions of extended galactic maser emission from other molecules that resemble the diffuse component of hydroxyl megamasers. Pumping mechanism The observed relationship between the luminosity of the hydroxyl line and the far infrared suggests that hydroxyl megamasers are radiatively pumped. Initial VLBI measurements of nearby hydroxyl megamasers seemed to present a problem with this model for the compact emission components, as they required a very high fraction of infrared photons to be absorbed by hydroxyl and lead to a maser photon being emitted, making collisional excitation a more plausible pumping mechanism. However, a model of maser emission with a clumpy masing medium appears able to reproduce the observed properties of compact and diffuse hydroxyl emission. A recent detailed treatment finds that photons with a wavelength of 53 micrometres are the primary pump for main-line maser emission, and that this applies to all hydroxyl masers. In order to provide enough photons at this wavelength, the interstellar dust that reprocesses stellar radiation to infrared wavelengths must have a temperature of at least 45 kelvins. Recent observations with the Spitzer Space Telescope confirm this basic picture, but there are still some discrepancies between details of the model and observations of hydroxyl megamaser host galaxies, such as the required dust opacity for megamaser emission. Applications Hydroxyl megamasers occur in the nuclear regions of LIRGs and appear to mark a particular stage in the formation of these galaxies. As hydroxyl emission is not subject to extinction by interstellar dust in its host LIRG, hydroxyl masers may be useful probes of the conditions under which star formation in LIRGs takes place. At redshifts of z ~ 2, there are LIRG-like galaxies more luminous than the ones in the nearby universe. 
The observed relationship between the hydroxyl luminosity and far-infrared luminosity suggests that hydroxyl megamasers in such galaxies may be tens to hundreds of times more luminous than the hydroxyl megamasers observed so far. Detection of hydroxyl megamasers in such galaxies would allow precise determination of the redshift, and aid understanding of star formation in these objects. The first detection of the Zeeman effect in another galaxy was made through observations of hydroxyl megamasers. The Zeeman effect is the splitting of a spectral line due to the presence of a magnetic field, and the size of the splitting is linearly proportional to the line-of-sight magnetic field strength. Zeeman splitting has been detected in five hydroxyl megamasers, and the typical strength of a detected field is of order a few milligauss, similar to the field strengths measured in galactic hydroxyl masers. Water megamasers Whereas hydroxyl megamasers seem to be fundamentally distinct in some ways from galactic hydroxyl masers, water megamasers do not seem to require conditions too dissimilar from those of galactic water masers. Water masers stronger than galactic water masers, some of which are strong enough to be classified as "mega" masers, may be described by the same luminosity function as galactic water masers. Some extragalactic water masers occur in star forming regions, like galactic water masers, while stronger water masers are found in the circumnuclear regions around active galactic nuclei (AGN). The isotropic luminosities of these masers span a range of order one to a few hundred solar luminosities, and they are found both in nearby galaxies like Messier 51 and in more distant galaxies like NGC 4258. Line characteristics and pumping mechanism Water maser emission is observed primarily at 22 GHz, due to a transition between rotational energy levels in the water molecule. The upper state is at an energy corresponding to 643 kelvins above the ground state, and populating this upper maser level requires number densities of molecular hydrogen of order 10⁸ cm⁻³ or greater and temperatures of at least 300 kelvins. The water molecule comes into thermal equilibrium at molecular hydrogen number densities of roughly 10¹¹ cm⁻³, so this places an upper limit on the number density in a water masing region. Water maser emission has been successfully modelled as arising behind shock waves propagating through dense regions of the interstellar medium. These shocks produce the high number densities and temperatures (relative to typical conditions in the interstellar medium) required for maser emission, and are successful in explaining observed masers. Applications Water megamasers may be used to provide accurate distance determinations to distant galaxies. Assuming a Keplerian orbit, measuring the centripetal acceleration and orbital velocity of the water maser spots yields the physical radius of their orbit. Comparing this physical radius to the corresponding angular radius measured on the sky then gives the distance to the maser. This method is effective with water megamasers because they occur in a small region around an AGN and have narrow linewidths. This method of measuring distances is being used to provide an independent measure of the Hubble constant that does not rely upon the use of standard candles. The method is limited, however, by the small number of water megamasers known at distances within the Hubble flow. 
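The geometry described above can be illustrated with a short numerical sketch. The script below assumes a circular, edge-on Keplerian maser orbit and uses hypothetical round numbers chosen purely for illustration; it is not based on the measured values for any particular galaxy.

```python
"""Illustrative sketch of the maser-disk geometric distance method.

Assumes a circular, edge-on Keplerian orbit: the high-velocity maser
spots give the orbital speed v, the systemic spots give the centripetal
acceleration a (from their observed velocity drift), and VLBI imaging
gives the angular radius theta of the orbit.  All input values below
are hypothetical round numbers used only for illustration.
"""

import math

# --- hypothetical observed quantities --------------------------------------
v_kms = 1000.0          # orbital speed of high-velocity maser spots [km/s]
a_kms_per_yr = 9.0      # centripetal acceleration of systemic spots [km/s/yr]
theta_mas = 4.0         # angular radius of the maser orbit [milliarcseconds]

# --- unit conversions -------------------------------------------------------
YEAR_S = 3.156e7                          # seconds per year
MAS_TO_RAD = math.radians(1.0 / 3.6e6)    # 1 mas expressed in radians
PC_M = 3.086e16                           # metres per parsec

v = v_kms * 1e3                     # m/s
a = a_kms_per_yr * 1e3 / YEAR_S     # m/s^2
theta = theta_mas * MAS_TO_RAD      # rad

# --- geometry ----------------------------------------------------------------
# Centripetal acceleration a = v^2 / r  =>  physical orbital radius r
r = v**2 / a                        # metres
# Small-angle relation r = D * theta  =>  distance D
D = r / theta                       # metres

print(f"orbital radius r = {r / PC_M:.2f} pc")
print(f"distance       D = {D / (1e6 * PC_M):.1f} Mpc")
```

With these illustrative inputs the sketch gives an orbital radius of roughly 0.1 pc and a distance of roughly 6 Mpc, showing how velocities, accelerations, and angular sizes combine into a purely geometric distance.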
This distance measurement also provides a measurement of the mass of the central object, which in this case is a supermassive black hole. Black hole mass measurement using water megamasers is the most accurate method of mass determination for black holes in galaxies other than the Milky Way. The black hole masses measured in this way are consistent with the M–sigma relation, an empirical correlation between stellar velocity dispersion in galactic bulges and the mass of the central supermassive black hole.
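The mass determination follows from the same orbital measurements. For a maser spot on a circular Keplerian orbit of radius r and speed v, equating the gravitational and centripetal accelerations gives the enclosed mass; this is a standard dynamical relation rather than a description of any specific published analysis:

```latex
\[
  \frac{G M}{r^{2}} = \frac{v^{2}}{r}
  \quad\Longrightarrow\quad
  M = \frac{v^{2}\, r}{G},
\]
```

so the same maser velocities and angular sizes that yield a geometric distance also yield the mass of the central black hole, with the physical radius r obtained from the angular radius once the distance is known.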
Physical sciences
Radio astronomy
Astronomy
20303498
https://en.wikipedia.org/wiki/Spinning%20%28polymers%29
Spinning (polymers)
Spinning is a manufacturing process for creating polymer fibers. It is a specialized form of extrusion that uses a spinneret to form multiple continuous filaments. Melt spinning If the polymer is a thermoplastic then it can undergo melt spinning. The molten polymer is extruded through a spinneret composed of capillaries where the resulting filament is solidified by cooling. Nylon, olefin, polyester, saran, and sulfar are produced via this process. Extrusion spinning Pellets or granules of the solid polymer are fed into an extruder. The pellets are compressed, heated and melted by an extrusion screw, then fed to a spinning pump and into the spinneret. Direct spinning The direct spinning process avoids the stage of solid polymer pellets. The polymer melt is produced from the raw materials, and then from the polymer finisher directly pumped to the spinning mill. Direct spinning is mainly applied during production of polyester fibers and filaments and is dedicated to high production capacity (>100 ton/day). Solution spinning If the melting point of the polymer is higher than its degradation temperature, the polymer must undergo solution spinning techniques for fiber formation. The polymer is first dissolved in a solvent, forming a spinning solution (sometimes called a "dope"). The spinning solution then undergoes dry, wet, dry-jet wet, gel, or electrospinning techniques. Dry spinning A spinning solution consisting of polymer and a volatile solvent is extruded through a spinneret into an evaporating chamber. A stream of hot air impinges on the jets of spinning solution emerging from the spinneret, evaporating the solvent, and solidifying the filaments. Solution blow spinning is a similar technique where polymer solution is sprayed directly onto a target to produce a nonwoven fiber mat. Wet spinning Wet spinning is the oldest of the five processes. The polymer is dissolved in a spinning solvent where it is extruded out through a spinneret submerged in a coagulation bath composed of nonsolvents. The coagulation bath causes the polymer to precipitate in fiber form. Acrylic, rayon, aramid, modacrylic, and spandex are produced via this process. A variant of wet spinning is dry-jet wet spinning, where the spinning solution passes through an air-gap prior to being submerged into the coagulation bath. This method is used in Lyocell spinning of dissolved cellulose, and can lead to higher polymer orientation due to the higher stretchability of the spinning solution versus the precipitated fiber. Gel spinning Gel spinning, also known as semi-melt spinning, is used to obtain high strength or other special properties in the fibers. Instead of wet spinning, which relies on precipitation as the main mechanism for solidification, gel spinning relies on temperature-induced physical gelation as the primary method for solidification. The resulting gelled fiber is then swollen with the spinning solvent (similar to gelatin desserts) which keeps the polymer chains somewhat bound together, resisting relaxation which is prevalent in wet spinning. The high solvent retention allows for ultra-high drawing as with ultra high molecular weight polyethylene (UHMWPE) (e.g., Spectra®) to produce fibers with a high degree of orientation, which increases fiber strength. The fibers are first cooled either with air or in a liquid bath to induce gelation, then the solvent is removed through ageing in a nonsolvent, or during the drawing stage. 
Some high strength polyethylene and polyacrylonitrile fibers are produced via this process. Electrospinning Electrospinning uses an electrical charge to draw very fine (typically on the micro or nano scale) fibres from a liquid - either a polymer solution or a polymer melt. Electrospinning shares characteristics of both electrospraying and conventional solution dry spinning of fibers. The process does not require the use of coagulation chemistry or high temperatures to produce solid threads from solution. This makes the process particularly suited to the production of fibers using large and complex molecules. Melt electrospinning is also practiced; this method ensures that no solvent can be carried over into the final product. Post-spin processes Drawing Finally, the fibers are drawn to increase strength and orientation. This may be done while the polymer is still solidifying or after it has completely cooled.
Technology
Spinning
null
20305069
https://en.wikipedia.org/wiki/Graphite%20oxide
Graphite oxide
Graphite oxide (GO), formerly called graphitic oxide or graphitic acid, is a compound of carbon, oxygen, and hydrogen in variable ratios, obtained by treating graphite with strong oxidizers and acids for resolving of extra metals. The maximally oxidized bulk product is a yellow solid with C:O ratio between 2.1 and 2.9, that retains the layer structure of graphite but with a much larger and irregular spacing. The bulk material spontaneously disperses in basic solutions or can be dispersed by sonication in polar solvents to yield monomolecular sheets, known as graphene oxide by analogy to graphene, the single-layer form of graphite. Graphene oxide sheets have been used to prepare strong paper-like materials, membranes, thin films, and composite materials. Initially, graphene oxide attracted substantial interest as a possible intermediate for the manufacture of graphene. The graphene obtained by reduction of graphene oxide still has many chemical and structural defects which is a problem for some applications but an advantage for some others. History and preparation Graphite oxide was first prepared by Oxford chemist Benjamin C. Brodie in 1859 by treating graphite with a mixture of potassium chlorate and fuming nitric acid. He reported synthesis of "paper-like foils" with 0.05 mm thickness. In 1957 Hummers and Offeman developed a safer, quicker, and more efficient process called Hummers' method, using a mixture of sulfuric acid H2SO4, sodium nitrate NaNO3, and potassium permanganate KMnO4, which is still widely used, often with some modifications. Largest monolayer GO with highly intact carbon framework and minimal residual impurity concentrations can be synthesized in inert containers using highly pure reactants and solvents. Graphite oxides demonstrate considerable variation of properties depending on the degree of oxidation and the synthesis method. For example, the temperature point of explosive exfoliation is generally higher for graphite oxide prepared by the Brodie method compared to Hummers graphite oxide, the difference is up to 100 degrees with the same heating rates. Hydration and solvation properties of Brodie and Hummers graphite oxides are also remarkably different. Recently a mixture of H2SO4 and KMnO4 has been used to cut open carbon nanotubes lengthwise, resulting in microscopic flat ribbons of graphene, a few atoms wide, with the edges "capped" by oxygen atoms (=O) or hydroxyl groups (-OH). Graphite (graphene) oxide has also been prepared by using a "bottom-up" synthesis method (Tang-Lau method) in which the sole source is glucose, the process is safer, simpler, and more environmentally friendly compared to traditionally "top-down" method, in which strong oxidizers are involved. Another important advantage of the Tang-Lau method is the control of thickness, ranging from monolayer to multilayers, by adjusting growth parameters. Structure The structure and properties of graphite oxide depend on the particular synthesis method and degree of oxidation. It typically preserves the layer structure of the parent graphite, but the layers are buckled and the interlayer spacing is about two times larger (~0.7 nm) than that of graphite. Strictly speaking "oxide" is an incorrect but historically established name. Besides epoxide groups (bridging oxygen atoms), other functional groups found experimentally are: carbonyl (C=O), hydroxyl (-OH), phenol and for graphite oxides prepared using sulphuric acid (e.g. 
Hummers method) some impurity of sulphur is often found, for example in a form of organosulfate groups. The detailed structure is still not understood due to the strong disorder and irregular packing of the layers. Graphene oxide layers are about 1.1 ± 0.2 nm thick. Scanning tunneling microscopy shows the presence of local regions where oxygen atoms are arranged in a rectangular pattern with lattice constant 0.27 nm × 0.41 nm. The edges of each layer are terminated with carboxyl and carbonyl groups. X-ray photoelectron spectroscopy shows the presence of several C1s peaks, their number and relative intensity depending on the particular oxidation method used. Assignment of these peaks to certain carbon functionalization types is somewhat uncertain and still under debate. For example, one interpretation goes as follows: non-oxygenated ring contexts (284.8 eV), C-O (286.2 eV), C=O (287.8 eV) and O-C=O (289.0 eV). Another interpretation, using density functional theory calculation, goes as follows: C=C with defects such as functional groups and pentagons (283.6 eV), C=C (non-oxygenated ring contexts) (284.3 eV), sp3C-H in the basal plane and C=C with functional groups (285.0 eV), C=O and C=C with functional groups, C-O (286.5 eV), and O-C=O (288.3 eV). Graphite oxide is hydrophilic and easily hydrated when exposed to water vapor or immersed in liquid water, resulting in a distinct increase of the inter-planar distance (up to 1.2 nm in saturated state). Additional water is also incorporated into the interlayer space due to high pressure induced effects. The maximal hydration state of graphite oxide in liquid water corresponds to insertion of 2-3 water monolayers. Cooling the graphite oxide/H2O samples results in "pseudo-negative thermal expansion" and cooling below the freezing point of water results in de-insertion of one water monolayer and lattice contraction. Complete removal of water from the structure seems difficult since heating at 60–80 °C results in partial decomposition and degradation of the material. Similar to water, graphite oxide easily incorporates other polar solvents, e.g. alcohols. However, intercalation of polar solvents occurs significantly different in Brodie and Hummers graphite oxides. Brodie graphite oxide is intercalated at ambient conditions by one monolayer of alcohols and several other solvents (e.g. dimethylformamide and acetone) when liquid solvent is available in excess. Separation of graphite oxide layers is proportional to the size of alcohol molecule. Cooling of Brodie graphite oxide immersed in excess of liquid methanol, ethanol, acetone and dimethylformamide results in step-like insertion of an additional solvent monolayer and lattice expansion. The phase transition detected by X-ray diffraction and differential scanning calorimetry (DSC) is reversible; de-insertion of solvent monolayer is observed when sample is heated back from low temperatures. An additional methanol and ethanol monolayer is reversibly inserted into the structure of Brodie graphite oxide under high pressure conditions. Hummers graphite oxide is intercalated with two methanol or ethanol monolayers at ambient temperature. The interlayer distance of Hummers graphite oxide in an excess of liquid alcohols increases gradually upon temperature decrease, reaching 19.4 and 20.6 Å at 140 K for methanol and ethanol, respectively. The gradual expansion of the Hummers graphite oxide lattice upon cooling corresponds to insertion of at least two additional solvent monolayers. 
Graphite oxide exfoliates and decomposes when rapidly heated at moderately high temperatures (~280–300 °C) with formation of finely dispersed amorphous carbon, somewhat similar to activated carbon. Characterization XRD, FTIR, Raman, XPS, AFM, TEM, SEM/EDX, thermogravimetric analysis, etc., are some common techniques used to characterize GO samples. Experimental results on graphite/graphene oxide have also been analyzed in detail by calculation. Since the distribution of oxygen functionalities on GO sheets is polydisperse, fractionation methods can be used to characterize and separate GO sheets on the basis of oxidation. Different synthesis methods give rise to different types of graphene oxide. Even different batches from similar oxidation methods can have differences in their properties due to variations in purification or quenching processes. Surface properties It is also possible to modify the surface of graphene oxide to change its properties. Graphene oxide has unique surface properties which make it a very good surfactant material for stabilizing various emulsion systems. Graphene oxide remains at the interface of the emulsion systems due to the difference in surface energy of the two phases separated by the interface. Relation to water Graphite oxides absorb moisture in proportion to humidity and swell in liquid water. The amount of water absorbed by graphite oxides depends on the particular synthesis method and shows a strong temperature dependence. Brodie graphite oxide selectively absorbs methanol from water/methanol mixtures in a certain range of methanol concentrations. Membranes prepared from graphite oxides (recently more often called "graphene oxide" membranes) are vacuum tight and impermeable to nitrogen and oxygen, but are permeable to water vapor. The membranes are also impermeable to "substances of lower molecular weight". Permeation of graphite and graphene oxide membranes by polar solvents is possible due to swelling of the graphite oxide structure. In the swelled state the membranes are also permeable to gases, e.g. helium. Graphene oxide sheets are chemically reactive in liquid water, leading them to acquire a small negative charge. The interlayer distance of dried graphite oxides was reported as ~6–7 Å, but in liquid water it increases up to 11–13 Å at room temperature. The lattice expansion becomes stronger at lower temperatures. The inter-layer distance in diluted NaOH reached infinity, resulting in dispersion of graphite oxide into single-layered graphene oxide sheets in solution. Graphite oxide can be used as a cation exchange membrane for materials such as KCl, HCl, CaCl2, MgCl2, and BaCl2 solutions. The membranes were permeable to large alkali ions, which are able to penetrate between the graphene oxide layers. Applications Optical nonlinearity Nonlinear optical materials are of great importance for ultrafast photonics and optoelectronics. Recently, the giant optical nonlinearities of graphene oxide (GO) have proven useful for a number of applications. For example, the optical limiting of GO is indispensable in the protection of sensitive instruments from laser-induced damage. Its saturable absorption can be used for pulse compression, mode-locking and Q-switching, and its nonlinear refraction (Kerr effect) is crucial for applications including all-optical switching, signal regeneration, and fast optical communications. 
One of the most intriguing and unique properties of GO is that its electrical and optical properties can be tuned dynamically by manipulating the content of oxygen-containing groups through either chemical or physical reduction methods. The tuning of the optical nonlinearities has been demonstrated during the laser-induced reduction process through the continuous increase of the laser irradiance, and four stages of different nonlinear activity have been discovered, which may serve as promising solid-state materials for novel nonlinear functional devices. Metal nanoparticles can also greatly enhance the optical nonlinearity and fluorescence of graphene oxide. Graphene manufacture Graphite oxide has attracted much interest as a possible route for the large-scale production and manipulation of graphene, a material with extraordinary electronic properties. Graphite oxide itself is an insulator, almost a semiconductor, with differential conductivity between 1 and 5×10−3 S/cm at a bias voltage of 10 V. However, being hydrophilic, graphite oxide disperses readily in water, breaking up into macroscopic flakes, mostly one layer thick. Chemical reduction of these flakes would yield a suspension of graphene flakes. It was argued that the first experimental observation of graphene was reported by Hanns-Peter Boehm in 1962. In this early work the existence of monolayer reduced graphene oxide flakes was demonstrated. The contribution of Boehm was recently acknowledged by Andre Geim, the Nobel Prize winner for graphene research. Partial reduction can be achieved by treating the suspended graphene oxide with hydrazine hydrate at 100 °C for 24 hours, by exposing graphene oxide to hydrogen plasma for a few seconds, or by exposure to a strong pulse of light, such as that of a xenon flash. Due to the oxidation protocol, the manifold defects already present in graphene oxide hamper the effectiveness of the reduction. Thus, the graphene quality obtained after reduction is limited by the precursor quality (graphene oxide) and the efficiency of the reducing agent. However, the conductivity of the graphene obtained by this route is below 10 S/cm, and the charge mobility is between 0.1 and 10 cm²/(V·s). These values are much greater than the oxide's, but still a few orders of magnitude lower than those of pristine graphene. Recently, the synthetic protocol for graphite oxide was optimized and almost intact graphene oxide with a preserved carbon framework was obtained. Reduction of this almost intact graphene oxide performs much better, and the mobility values of the charge carriers exceed 1000 cm²/(V·s) for the best-quality flakes. Inspection with the atomic force microscope shows that the oxygen bonds distort the carbon layer, creating a pronounced intrinsic roughness in the oxide layers which persists after reduction. These defects also show up in the Raman spectra of graphene oxide. Large amounts of graphene sheets may also be produced through thermal methods. For example, in 2006 a method was discovered that simultaneously exfoliates and reduces graphite oxide by rapid heating (>2000 °C/min) to 1050 °C. At this temperature, carbon dioxide is released as the oxygen functionalities are removed, and it explosively separates the sheets as it comes out. The temperature of reduction is important for the oxygen content of the final product, with a higher degree of reduction at higher reduction temperatures. Exposing a film of graphite oxide to the laser of a LightScribe DVD has also been shown to produce quality graphene at a low cost. 
Graphene oxide has also been reduced to graphene in situ, using a 3D printed pattern of engineered E. coli bacteria. Coupling of graphene oxide with biomolecules such as peptide, proteins and enzymes enhances its biomedical applications. Currently, researchers are focussed on reducing graphene oxide using non-toxic substances; tea and coffee powder, lemon extract and various plants based antioxidants are widely used. Water purification Graphite oxides were studied for desalination of water using reverse osmosis beginning in the 1960s. In 2011 additional research was released. In 2013 Lockheed Martin announced their Perforene graphene filter. Lockheed claims the filter reduces the energy costs of reverse osmosis desalination by 99%. Lockheed claimed that the filter was 500 times thinner than the best filter then on the market, one thousand times stronger and requires 1% of the pressure. The product was not expected to be released until 2020. Another study showed that graphite oxide could be engineered to allow water to pass, but retain some larger ions. Narrow capillaries allow rapid permeation by mono- or bilayer water. Multilayer laminates have a structure similar to nacre, which provides mechanical strength in water free conditions. Helium cannot pass through the membranes in humidity free conditions, but penetrates easily when exposed to humidity, whereas water vapor passes with no resistance. Dry laminates are vacuum-tight, but immersed in water, they act as molecular sieves, blocking some solutes. A third project produced graphene sheets with subnanoscale (0.40 ± 0.24 nm) pores. The graphene was bombarded with gallium ions, which disrupt carbon bonds. Etching the result with an oxidizing solution produces a hole at each spot struck by a gallium ion. The length of time spent in the oxidizing solution determined average pore size. Pore density reached 5 trillion pores per square centimeter, while retaining structural integrity. The pores permitted cation transport after short oxidation periods, consistent with electrostatic repulsion from negatively charged functional groups at pore edges. After longer oxidation periods, sheets were permeable to salt but not larger organic molecules. In 2015 a team created a graphene oxide tea that over the course of a day removed 95% of heavy metals in a water solution. A composite consisting of NiFe2O4 small ferrimagnetic nanoparticles and partially reduced graphene oxide functionalized with nitrogen atoms was successfully used to remove Cr(III) ion from water. The advantage of this nanocomposite is that it can be separated from water magnetically. One project layered carbon atoms in a honeycomb structure, forming a hexagon-shaped crystal that measured about 0.1 millimeters in width and length, with subnanometer holes. Later work increased the membrane size to on the order of several millimeters. Graphene attached to a polycarbonate support structure was initially effective at removing salt. However, defects formed in the graphene. Filling larger defects with nylon and small defects with hafnium metal followed by a layer of oxide restored the filtration effect. In 2016 engineers developed graphene-based films powered by the sun that can filter dirty/salty water. Bacteria were used to produce a material consisting of two nanocellulose layers. The lower layer contains pristine cellulose, while the top layer contains cellulose and graphene oxide, which absorbs sunlight and produces heat. The system draws water from below into the material. 
The water diffuses into the upper layer, where it evaporates and leaves behind any contaminants. The evaporated water condenses on top, where it can be captured. The film is produced by repeatedly adding a fluid coating that hardens. Bacteria produce nanocellulose fibers with interspersed graphene oxide flakes. The film is light and easily manufactured at scale. Coating Optically transparent, multilayer films made from graphene oxide are impermeable under dry conditions. Exposed to water (or water vapor), they allow passage of molecules below a certain size. The films consist of millions of randomly stacked flakes, leaving nano-sized capillaries between them. Closing these nanocapillaries using chemical reduction with hydroiodic acid creates "reduced graphene oxide" (r-GO) films that, at thicknesses greater than 100 nanometers, are completely impermeable to gases, liquids, and strong chemicals. Glassware or copper plates covered with such a graphene "paint" can be used as containers for corrosive acids. Graphene-coated plastic films could be used in medical packaging to improve shelf life. Layer-by-layer coatings based on amine-modified graphene oxide and Nafion show excellent antimicrobial performance that is not compromised when heated for 2 hours at 200 °C. Related materials Dispersed graphene oxide flakes can also be sifted out of the dispersion (as in paper manufacture) and pressed to make an exceedingly strong graphene oxide paper. Graphene oxide has been used in DNA analysis applications. The large planar surface of graphene oxide allows simultaneous quenching of multiple DNA probes labeled with different dyes, allowing the detection of multiple DNA targets in the same solution. Further advances in graphene oxide-based DNA sensors could result in very inexpensive rapid DNA analysis. Recently a group of researchers from the University of L'Aquila (Italy) discovered new wetting properties of graphene oxide thermally reduced in ultra-high vacuum up to 900 °C. They found a correlation between the surface chemical composition, the surface free energy and its polar and dispersive components, giving a rationale for the wetting properties of graphene oxide and reduced graphene oxide. Flexible rechargeable battery electrode Graphene oxide has been demonstrated as a flexible free-standing battery anode material for room-temperature lithium-ion and sodium-ion batteries. It is also being studied as a high-surface-area conducting agent in lithium-sulfur battery cathodes. The functional groups on graphene oxide can serve as sites for chemical modification and immobilization of active species. This approach allows for the creation of hybrid architectures for electrode materials. Recent examples of this have been implemented in lithium-ion batteries, which are known for being rechargeable at the cost of low capacity limits. Graphene oxide-based composites functionalized with metal oxides and sulfides have been shown in recent research to induce enhanced battery performance. This has similarly been adapted into applications in supercapacitors, since the electronic properties of graphene oxide allow it to bypass some of the more prevalent restrictions of typical transition metal oxide electrodes. Research in this field is developing, with additional exploration into methods involving nitrogen doping and pH adjustment to improve capacitance. Additionally, reduced graphene oxide sheets, which display superior electronic properties akin to those of pure graphene, are also being explored. 
Reduced graphene oxide greatly increases the conductivity and efficiency, while sacrificing some flexibility and structural integrity. Graphene oxide lens The optical lens has been playing a critical role in almost all areas of science and technology since its invention about 3000 years ago. With the advances in micro- and nanofabrication techniques, continued miniaturization of the conventional optical lenses has always been requested for various applications such as communications, sensors, data storage and a wide range of other technology-driven and consumer-driven industries. Specifically, ever smaller sizes, as well as thinner thicknesses of micro lenses, are highly needed for subwavelength optics or nano-optics with extremely small structures, particularly for visible and near-IR applications. Also, as the distance scale for optical communications shrinks, the required feature sizes of micro lenses are rapidly pushed down. Recently, the excellent properties of newly discovered graphene oxide provide novel solutions to overcome the challenges of current planar focusing devices. Specifically, giant refractive index modification (as large as 10^-1), which is one order of magnitude larger than the current materials, between graphene oxide (GO) and reduced graphene oxide (rGO) have been demonstrated by dynamically manipulating its oxygen content using the direct laser writing (DLW) method. As a result, the overall lens thickness can be potentially reduced by more than ten times. Also, the linear optical absorption of GO is found to increase as the reduction of GO deepens, which results in transmission contrast between GO and rGO and therefore provides an amplitude modulation mechanism. Moreover, both the refractive index and the optical absorption are found to be dispersionless over a broad wavelength range from visible to near infrared. Finally, GO film offers flexible patterning capability by using the maskless DLW method, which reduces the manufacturing complexity and requirements. As a result, a novel ultrathin planar lens on a GO thin film has been realized recently using the DLW method. The distinct advantage of the GO flat lens is that phase modulation and amplitude modulation can be achieved simultaneously, which are attributed to the giant refractive index modulation and the variable linear optical absorption of GO during its reduction process, respectively. Due to the enhanced wavefront shaping capability, the lens thickness is pushed down to subwavelength scale (~200 nm), which is thinner than all current dielectric lenses (~ μm scale). The focusing intensities and the focal length can be controlled effectively by varying the laser powers and the lens sizes, respectively. By using an oil immersion high numerical aperture (NA) objective during DLW process, 300 nm fabrication feature size on GO film has been realized, and therefore the minimum lens size has been shrunk down to 4.6 μm in diameter, which is the smallest planar micro lens and can only be realized with metasurface by FIB. Thereafter, the focal length can be reduced to as small as 0.8 μm, which would potentially increase the numerical aperture (NA) and the focusing resolution. The full-width at half-maximum (FWHM) of 320 nm at the minimum focal spot using a 650 nm input beam has been demonstrated experimentally, which corresponding to the effective NA of 1.24 (n=1.5), the largest NA of current micro lenses. 
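One simple way to relate the quoted focal-spot size to an effective numerical aperture is the Rayleigh resolution criterion. Identifying the measured spot size with the Rayleigh limit is an assumption made here purely for illustration (the original analysis may use a different definition), but under that assumption the two figures quoted above are mutually consistent:

```latex
% Rayleigh criterion: focal-spot size ~ 0.61 * wavelength / NA,
% evaluated with the quoted 650 nm input beam and 320 nm spot size.
\[
  \mathrm{FWHM} \;\approx\; \frac{0.61\,\lambda}{\mathrm{NA}}
  \quad\Longrightarrow\quad
  \mathrm{NA} \;\approx\; \frac{0.61 \times 650\ \mathrm{nm}}{320\ \mathrm{nm}} \;\approx\; 1.24 .
\]
```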
Furthermore, ultra-broadband focusing capability from 500 nm to as far as 2 μm have been realized with the same planar lens, which is still a major challenge of focusing in infrared range due to limited availability of suitable materials and fabrication technology. Most importantly, the synthesized high quality GO thin films can be flexibly integrated on various substrates and easily manufactured by using the one-step DLW method over a large area at a comparable low cost and power (~nJ/pulse), which eventually makes the GO flat lenses promising for various practical applications. Energy conversion Photocatalytic water splitting is an artificial photosynthesis process in which water is dissociated into hydrogen (H2) and oxygen (O2), using artificial or natural light. Methods such as photocatalytic water splitting are currently being investigated to produce hydrogen as a clean source of energy. The superior electron mobility and high surface area of graphene oxide sheets suggest it may be implemented as a catalyst that meets the requirements for this process. Specifically, graphene oxide's compositional functional groups of epoxide (-O-) and hydroxide (-OH) allow for more flexible control in the water splitting process. This flexibility can be used to tailor the band gap and band positions that are targeted in photocatalytic water splitting. Recent research experiments have demonstrated that the photocatalytic activity of graphene oxide containing a band gap within the required limits has produced effective splitting results, particularly when used with 40-50% coverage at a 2:1 hydroxide:epoxide ratio. When used in composite materials with CdS (a typical catalyst used in photocatalytic water splitting), graphene oxide nanocomposites have been shown to exhibit increased hydrogen production and quantum efficiency. Hydrogen storage Graphene oxide is also being explored for its applications in hydrogen storage. Hydrogen molecules can be stored among the oxygen-based functional groups found throughout the sheet. This hydrogen storage capability can be further manipulated by modulating the interlayer distance between sheets, as well as making changes to the pore sizes. Research in transition metal decoration on carbon sorbents to enhance hydrogen binding energy has led to experiments with titanium and magnesium anchored to hydroxyl groups, allowing for the binding of multiple hydrogen molecules. Precision medicine Graphene oxide has been studied for its promising uses in a wide variety of nanomedical applications including tissue engineering, cancer treatment, medical imaging, and drug delivery. Its physiochemical properties allow for a structure to regulate the behaviour of stem cells, with the potential to assist in the intracellular delivery of DNA, growth factors, and synthetic proteins that could allow for the repair and regeneration of muscle tissue. Due to its unique behaviour in biological environments, GO has also been proposed as a novel material in early cancer diagnosis. It has also been explored for its uses in vaccines and immunotherapy, including as a dual-use adjuvant and carrier of biomedical materials. In September 2020, researchers at the Shanghai National Engineering Research Center for Nanotechnology in China filed a patent for use of graphene oxide in a recombinant vaccine under development against SARS-CoV-2. 
Toxicity Several typical mechanisms underlying graphene (oxide) nanomaterial's toxicity have been revealed, for instance, physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis. In these mechanisms, toll-like receptors (TLR), transforming growth factor-beta (TGF-β) and tumor necrosis factor-alpha (TNF-α) dependent-pathways are involved in the signalling pathway network, and oxidative stress plays a crucial role in these pathways. Many experiments have shown that graphene (oxide) nanomaterials have toxic side effects in many biological applications, but more in-depth study of toxicity mechanisms is needed. According to the USA FDA, graphene, graphene oxide, and reduced graphene oxide elicit toxic effects both in vitro and in vivo. Graphene-family nanomaterials (GFN) are not approved by the USA FDA for human consumption.
Physical sciences
Ceramic compounds
Chemistry
20310362
https://en.wikipedia.org/wiki/Stellar%20kinematics
Stellar kinematics
In astronomy, stellar kinematics is the observational study or measurement of the kinematics or motions of stars through space. Stellar kinematics encompasses the measurement of stellar velocities in the Milky Way and its satellites as well as the internal kinematics of more distant galaxies. Measurement of the kinematics of stars in different subcomponents of the Milky Way, including the thin disk, the thick disk, the bulge, and the stellar halo, provides important information about the formation and evolutionary history of our Galaxy. Kinematic measurements can also identify exotic phenomena such as hypervelocity stars escaping from the Milky Way, which are interpreted as the result of gravitational encounters of binary stars with the supermassive black hole at the Galactic Center. Stellar kinematics is related to but distinct from the subject of stellar dynamics, which involves the theoretical study or modeling of the motions of stars under the influence of gravity. Stellar-dynamical models of systems such as galaxies or star clusters are often compared with or tested against stellar-kinematic data to study their evolutionary history and mass distributions, and to detect the presence of dark matter or supermassive black holes through their gravitational influence on stellar orbits. Space velocity The component of stellar motion toward or away from the Sun, known as radial velocity, can be measured from the spectrum shift caused by the Doppler effect. The transverse component, or proper motion, must be found by taking a series of positional determinations against more distant objects. Once the distance to a star is determined through astrometric means such as parallax, the space velocity can be computed (a simple numerical sketch of this conversion is given below). This is the star's actual motion relative to the Sun or the local standard of rest (LSR). The latter is typically taken as a position at the Sun's present location that is following a circular orbit around the Galactic Center at the mean velocity of those nearby stars with low velocity dispersion. The Sun's motion with respect to the LSR is called the "peculiar solar motion". The components of space velocity in the Milky Way's Galactic coordinate system are usually designated U, V, and W, given in km/s, with U positive in the direction of the Galactic Center, V positive in the direction of galactic rotation, and W positive in the direction of the North Galactic Pole. The peculiar motion of the Sun with respect to the LSR is (U, V, W) = (11.1, 12.24, 7.25) km/s, with statistical uncertainties of (+0.69/−0.75, +0.47/−0.47, +0.37/−0.36) km/s and systematic uncertainties of (1, 2, 0.5) km/s. (Note that V is 7 km/s larger than estimated in 1998 by Dehnen et al.) Use of kinematic measurements Stellar kinematics yields important astrophysical information about stars and the galaxies in which they reside. Stellar kinematics data combined with astrophysical modeling produces important information about the galactic system as a whole. Measured stellar velocities in the innermost regions of galaxies, including the Milky Way, have provided evidence that many galaxies host supermassive black holes at their centers. In the outer regions of galaxies, such as the galactic halo, velocity measurements of globular clusters orbiting in these regions provide evidence for dark matter. Both of these cases derive from the key fact that stellar kinematics can be related to the overall potential in which the stars are bound. 
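As a simple illustration of how the measurements described in the Space velocity section combine into a velocity, the sketch below converts a parallax, a total proper motion, and a radial velocity into a tangential velocity and a total space speed. The input values are hypothetical and chosen only for illustration; resolving the motion into the (U, V, W) components additionally requires the equatorial-to-Galactic coordinate rotation, which is omitted here.

```python
"""Illustrative conversion of astrometric measurements to a space velocity.

The factor 4.74 km/s is the tangential speed of a star at 1 parsec whose
proper motion is 1 arcsecond per year (one astronomical unit per year
expressed in km/s).  Input values are hypothetical and chosen purely for
illustration.
"""

import math

# --- hypothetical measurements ----------------------------------------------
parallax_mas = 50.0        # parallax [milliarcseconds] -> distance of 20 pc
pm_total_mas_yr = 500.0    # total proper motion [mas/yr]
v_radial_kms = -60.0       # radial velocity [km/s], negative = approaching

# --- derived quantities -------------------------------------------------------
distance_pc = 1000.0 / parallax_mas        # d [pc] = 1 / parallax [arcsec]
pm_arcsec_yr = pm_total_mas_yr / 1000.0    # proper motion in arcsec/yr

# Tangential (transverse) velocity: v_t = 4.74 * mu["/yr] * d[pc]  [km/s]
v_tangential_kms = 4.74 * pm_arcsec_yr * distance_pc

# Total space speed relative to the Sun (radial and tangential are orthogonal)
v_space_kms = math.hypot(v_radial_kms, v_tangential_kms)

print(f"distance            = {distance_pc:.1f} pc")
print(f"tangential velocity = {v_tangential_kms:.1f} km/s")
print(f"space speed         = {v_space_kms:.1f} km/s")
```

The factor 4.74 km/s arises because a star at a distance of one parsec with a proper motion of one arcsecond per year moves tangentially by one astronomical unit per year.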
If accurate stellar kinematic measurements are made for a star or group of stars orbiting in a certain region of a galaxy, the gravitational potential and mass distribution can be inferred, because the potential in which the stars are bound determines their orbits and hence their motion. Examples of using kinematics combined with modeling to construct an astrophysical system include: Rotation of the Milky Way's disc: From the proper motions and radial velocities of stars within the Milky Way disc one can show that there is differential rotation. When combining these measurements of stars' proper motions and their radial velocities, along with careful modeling, it is possible to obtain a picture of the rotation of the Milky Way disc. The local character of galactic rotation in the solar neighborhood is encapsulated in the Oort constants. Structural components of the Milky Way: Using stellar kinematics, astronomers construct models which seek to explain the overall galactic structure in terms of distinct kinematic populations of stars. This is possible because these distinct populations are often located in specific regions of galaxies. For example, within the Milky Way, there are three primary components, each with its own distinct stellar kinematics: the disc, halo, and bulge or bar. These kinematic groups are closely related to the stellar populations in the Milky Way, forming a strong correlation between motion and chemical composition and thus indicating different formation mechanisms. For the Milky Way, the disk stars orbit the Galactic Center with a common characteristic speed and a comparatively small RMS (root-mean-square) velocity relative to this speed. For bulge population stars, the velocities are randomly oriented, with a larger relative RMS velocity and no net circular velocity. The Galactic stellar halo consists of stars with orbits that extend to the outer regions of the galaxy. Some of these stars will continually orbit far from the galactic center, while others are on trajectories which bring them to various distances from the galactic center. These stars have little to no average rotation. Many stars in this group belong to globular clusters which formed long ago and thus have a distinct formation history, which can be inferred from their kinematics and low metallicities. The halo may be further subdivided into an inner and outer halo, with the inner halo having a net prograde motion with respect to the Milky Way and the outer a net retrograde motion. External galaxies: Spectroscopic observations of external galaxies make it possible to characterize the bulk motions of the stars they contain. While these stellar populations are generally not resolved to the level where one can track the motion of individual stars (except for the very nearest galaxies), measurements of the kinematics of the integrated stellar population along the line of sight provide information including the mean velocity and the velocity dispersion, which can then be used to infer the distribution of mass within the galaxy. Measurement of the mean velocity as a function of position gives information on the galaxy's rotation, with distinct regions of the galaxy being redshifted or blueshifted in relation to the galaxy's systemic velocity. Mass distributions: Through measurement of the kinematics of tracer objects such as globular clusters and the orbits of nearby satellite dwarf galaxies, we can determine the mass distribution of the Milky Way or other galaxies. 
This is accomplished by combining kinematic measurements with dynamical modeling. Recent advancements due to Gaia In 2018, the Gaia Data Release 2 (GAIA DR2) marked a significant advancement in stellar kinematics, offering a rich dataset of precise measurements. This release included detailed stellar kinematic and stellar parallax data, contributing to a more nuanced understanding of the Milky Way's structure. Notably, it facilitated the determination of proper motions for numerous celestial objects, including the absolute proper motions of 75 globular clusters situated over a wide range of distances from the Sun. Furthermore, Gaia's comprehensive dataset enabled the measurement of absolute proper motions in nearby dwarf spheroidal galaxies, serving as crucial indicators for understanding the mass distribution within the Milky Way. GAIA DR3 improved the quality of previously published data by providing detailed astrophysical parameters. While the complete GAIA DR4 is yet to be unveiled, the latest release offers enhanced insights into white dwarfs, hypervelocity stars, cosmological gravitational lensing, and the merger history of the Galaxy. Stellar kinematic types Stars within galaxies may be classified based on their kinematics. For example, the stars in the Milky Way can be subdivided into two general populations, based on their metallicity, or proportion of elements heavier than helium. Among nearby stars, it has been found that population I stars with higher metallicity are generally located in the stellar disk, while older population II stars are in random orbits with little net rotation. The latter have elliptical orbits that are inclined to the plane of the Milky Way. Comparison of the kinematics of nearby stars has also led to the identification of stellar associations. These are most likely groups of stars that share a common point of origin in giant molecular clouds. There are many additional ways to classify stars based on their measured velocity components, and this provides detailed information about the nature of the star's formation time, its present location, and the general structure of the galaxy. As a star moves in a galaxy, the smoothed-out gravitational potential of all the other stars and other mass within the galaxy plays a dominant role in determining the stellar motion. Stellar kinematics can provide insights into where the star formed within the galaxy. Measurements of an individual star's kinematics can identify stars that are peculiar outliers, such as a high-velocity star moving much faster than its nearby neighbors. High-velocity stars Depending on the definition, a high-velocity star is a star moving faster than 65 km/s to 100 km/s relative to the average motion of the other stars in the star's neighborhood. The velocity is also sometimes defined as supersonic relative to the surrounding interstellar medium. The three types of high-velocity stars are: runaway stars, halo stars and hypervelocity stars. High-velocity stars were studied by Jan Oort, who used their kinematic data to predict that high-velocity stars have very little tangential velocity. Runaway stars A runaway star is one that is moving through space with an abnormally high velocity relative to the surrounding interstellar medium. The proper motion of a runaway star often points exactly away from a stellar association, of which the star was formerly a member, before it was hurled out. 
Mechanisms that may give rise to a runaway star include: Gravitational interactions between stars in a stellar system can result in large accelerations of one or more of the involved stars. In some cases, stars may even be ejected. This can occur in seemingly stable star systems of only three stars, as described in studies of the three-body problem in gravitational theory. A collision or close encounter between stellar systems, including galaxies, may result in the disruption of both systems, with some of the stars being accelerated to high velocities, or even ejected. A large-scale example is the gravitational interaction between the Milky Way and the Large Magellanic Cloud. A supernova explosion in a multiple star system can accelerate both the supernova remnant and the remaining stars to high velocities. Multiple mechanisms may accelerate the same runaway star. For example, a massive star that was originally ejected due to gravitational interactions with its stellar neighbors may itself go supernova, producing a remnant with a velocity modulated by the supernova kick. If this supernova occurs in the immediate vicinity of other stars, it may produce more runaways in the process. An example of a related set of runaway stars is the case of AE Aurigae, 53 Arietis and Mu Columbae, all of which are moving away from each other at velocities of over 100 km/s (for comparison, the Sun moves through the Milky Way at about 20 km/s faster than the local average). Tracing their motions back, their paths intersect near the Orion Nebula about 2 million years ago. Barnard's Loop is believed to be the remnant of the supernova that launched the other stars. Another example is the X-ray object Vela X-1, where photodigital techniques reveal the presence of a typical supersonic bow shock hyperbola. Halo stars Halo stars are very old stars that have a low metallicity and do not follow circular orbits around the center of the Milky Way within its disk. Instead, the halo stars travel in elliptical orbits, often inclined to the disk, which take them well above and below the plane of the Milky Way. Although their orbital velocities relative to the Milky Way may be no faster than those of disk stars, their different paths result in high relative velocities. Typical examples are the halo stars passing through the disk of the Milky Way at steep angles. One of the nearest 45 stars, called Kapteyn's Star, is an example of the high-velocity stars that lie near the Sun: its observed radial velocity is −245 km/s, and it has correspondingly large components of space velocity relative to the Sun. Hypervelocity stars Hypervelocity stars (designated as HVS or HV in stellar catalogues) have substantially higher velocities than the rest of the stellar population of a galaxy. Some of these stars may even exceed the escape velocity of the galaxy. In the Milky Way, stars usually have velocities on the order of 100 km/s, whereas hypervelocity stars typically have velocities on the order of 1000 km/s. Most of these fast-moving stars are thought to be produced near the center of the Milky Way, where there is a larger population of these objects than further out. One of the fastest known stars in our Galaxy is the O-class sub-dwarf US 708, which is moving away from the Milky Way with a total velocity of around 1200 km/s. Jack G. Hills first predicted the existence of HVSs in 1988. Their existence was later confirmed in 2005 by Warren Brown, Margaret Geller, Scott Kenyon, and Michael Kurtz. 
Ten unbound HVSs have since been identified, one of which was initially believed to have originated from the Large Magellanic Cloud rather than the Milky Way; further measurements, however, placed its origin within the Milky Way. Due to uncertainty about the distribution of mass within the Milky Way, determining whether a HVS is unbound is difficult. A further five known high-velocity stars may be unbound from the Milky Way, and 16 HVSs are thought to be bound. The nearest currently known HVS (HVS2) is about 19 kpc from the Sun. Roughly 20 hypervelocity stars have been observed to date. Though most of these were observed in the Northern Hemisphere, the possibility remains that there are HVSs only observable from the Southern Hemisphere. It is believed that about 1,000 HVSs exist in the Milky Way. Considering that there are around 100 billion stars in the Milky Way, this is a minuscule fraction (~0.000001%). Results from the second data release of Gaia (DR2) show that most high-velocity late-type stars have a high probability of being bound to the Milky Way. However, distant hypervelocity star candidates are more promising. In March 2019, LAMOST-HVS1 was reported to be a confirmed hypervelocity star ejected from the stellar disk of the Milky Way. In July 2019, astronomers reported finding an A-type star, S5-HVS1, traveling faster than any other star detected so far. The star is in the Grus (or Crane) constellation in the southern sky. It may have been ejected from the Milky Way after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy. Origin of hypervelocity stars HVSs are believed to predominantly originate by close encounters of binary stars with the supermassive black hole in the center of the Milky Way. One of the two partners is gravitationally captured by the black hole (in the sense of entering orbit around it), while the other escapes with high velocity, becoming a HVS. Such maneuvers are analogous to the capture and ejection of interstellar objects by a star. Supernova-induced HVSs may also be possible, although they are presumably rare. In this scenario, a HVS is ejected from a close binary system as a result of the companion star undergoing a supernova explosion. Ejection velocities up to 770 km/s, as measured from the galactic rest frame, are possible for late-type B-stars. This mechanism can explain the origin of HVSs which are ejected from the galactic disk. Known HVSs are main-sequence stars with masses a few times that of the Sun. HVSs with smaller masses are also expected, and G/K-dwarf HVS candidates have been found. Some HVSs may have originated from a disrupted dwarf galaxy: when it made its closest approach to the center of the Milky Way, some of its stars broke free and were thrown into space due to the slingshot-like effect of the boost. Some neutron stars are inferred to be traveling with similar speeds, which could be related to HVSs and the HVS ejection mechanism. Neutron stars are the remnants of supernova explosions, and their extreme speeds are very likely the result of an asymmetric supernova explosion or the loss of a nearby companion during the supernova explosion that formed them. The neutron star RX J0822-4300, which was measured to move at a record speed of over 1,500 km/s (0.5% of the speed of light) in 2007 by the Chandra X-ray Observatory, is thought to have been produced in the first way. 
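As noted above, deciding whether a hypervelocity star is actually unbound depends on the assumed mass distribution of the Milky Way. The sketch below makes this concrete with the crudest possible assumption, a point-mass Galactic potential; the enclosed mass and distance used are placeholders for illustration, not a recommended Galactic model.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
PC = 3.086e16           # parsec, m

def escape_velocity_km_s(enclosed_mass_msun, galactocentric_radius_kpc):
    """Escape speed from a crude point-mass potential: v_esc = sqrt(2GM/r)."""
    m = enclosed_mass_msun * M_SUN
    r = galactocentric_radius_kpc * 1000.0 * PC
    return math.sqrt(2.0 * G * m / r) / 1000.0   # convert m/s to km/s

# Placeholder toy model: ~1e12 solar masses enclosed, star at 20 kpc.
v_esc = escape_velocity_km_s(1.0e12, 20.0)
v_star = 1000.0                                  # measured total speed, km/s
print(f"escape speed ~{v_esc:.0f} km/s -> "
      f"{'unbound' if v_star > v_esc else 'bound'} under this toy model")
```

Realistic assessments replace the point mass with an extended halo model, and the uncertainties in that model are precisely why some candidate HVSs shift between the bound and unbound lists as mass estimates are revised.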
One theory regarding the ignition of Type Ia supernovae invokes the onset of a merger between two white dwarfs in a binary star system, triggering the explosion of the more massive white dwarf. If the less massive white dwarf is not destroyed during the explosion, it will no longer be gravitationally bound to its destroyed companion, causing it to leave the system as a hypervelocity star with its pre-explosion orbital velocity of 1000–2500 km/s. In 2018, three such stars were discovered using data from the Gaia satellite. Partial list of HVSs As of 2014, twenty HVS were known. HVS 1 – (SDSS J090744.99+024506.8) (a.k.a. The Outcast Star) – the first hypervelocity star to be discovered HVS 2 – (SDSS J093320.86+441705.4 or US 708) HVS 3 – (HE 0437-5439) – possibly from the Large Magellanic Cloud HVS 4 – (SDSS J091301.00+305120.0) HVS 5 – (SDSS J091759.42+672238.7) HVS 6 – (SDSS J110557.45+093439.5) HVS 7 – (SDSS J113312.12+010824.9) HVS 8 – (SDSS J094214.04+200322.1) HVS 9 – (SDSS J102137.08-005234.8) HVS 10 – (SDSS J120337.85+180250.4) Kinematic groups A set of stars with similar space motion and ages is known as a kinematic group. These are stars that could share a common origin, such as the evaporation of an open cluster, the remains of a star forming region, or collections of overlapping star formation bursts at differing time periods in adjacent regions. Most stars are born within molecular clouds known as stellar nurseries. The stars formed within such a cloud compose gravitationally bound open clusters containing dozens to thousands of members with similar ages and compositions. These clusters dissociate with time. Groups of young stars that escape a cluster, or are no longer bound to each other, form stellar associations. As these stars age and disperse, their association is no longer readily apparent and they become moving groups of stars. Astronomers are able to determine if stars are members of a kinematic group because they share the same age, metallicity, and kinematics (radial velocity and proper motion). As the stars in a moving group formed in proximity and at nearly the same time from the same gas cloud, although later disrupted by tidal forces, they share similar characteristics. Stellar associations A stellar association is a very loose star cluster, whose stars share a common origin and are still moving together through space, but have become gravitationally unbound. Associations are primarily identified by their common movement vectors and ages. Identification by chemical composition is also used to factor in association memberships. Stellar associations were first discovered by the Armenian astronomer Viktor Ambartsumian in 1947. The conventional name for an association uses the names or abbreviations of the constellation (or constellations) in which they are located; the association type, and, sometimes, a numerical identifier. Types Viktor Ambartsumian first categorized stellar associations into two groups, OB and T, based on the properties of their stars. A third category, R, was later suggested by Sidney van den Bergh for associations that illuminate reflection nebulae. The OB, T, and R associations form a continuum of young stellar groupings. But it is currently uncertain whether they are an evolutionary sequence, or represent some other factor at work. Some groups also display properties of both OB and T associations, so the categorization is not always clear-cut. 
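Because members of a kinematic group share nearly the same space motion, a crude first cut at candidate membership can be made by comparing a star's (U, V, W) velocity with a group's mean velocity. The following sketch illustrates the idea with invented numbers and a simple distance threshold; real membership tools, such as the BANYAN Σ code mentioned below, use full probabilistic models of position and velocity rather than a cut of this kind.

```python
import math

def velocity_distance(star_uvw, group_uvw):
    """Euclidean separation in (U, V, W) velocity space, in km/s."""
    return math.dist(star_uvw, group_uvw)

def is_candidate_member(star_uvw, group_uvw, tolerance_km_s=5.0):
    """Flag a star as a candidate member if its space velocity lies
    within tolerance_km_s of the group's mean space velocity."""
    return velocity_distance(star_uvw, group_uvw) <= tolerance_km_s

# Hypothetical mean motion of a moving group and two test stars (km/s).
group_mean = (-10.0, -16.0, -9.0)
star_a = (-11.2, -15.4, -8.7)    # close in velocity space -> candidate
star_b = (35.0, -48.0, 12.0)     # discrepant velocity -> not a candidate

for name, uvw in (("star A", star_a), ("star B", star_b)):
    print(name, "candidate member:", is_candidate_member(uvw, group_mean))
```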
OB associations Young associations will contain 10 to 100 massive stars of spectral class O and B, and are known as OB associations. In addition, these associations also contain hundreds or thousands of low- and intermediate-mass stars. Association members are believed to form within the same small volume inside a giant molecular cloud. Once the surrounding dust and gas is blown away, the remaining stars become unbound and begin to drift apart. It is believed that the majority of all stars in the Milky Way were formed in OB associations. O-class stars are short-lived, and will expire as supernovae after roughly one million years. As a result, OB associations are generally only a few million years in age or less. The O-B stars in the association will have burned all their fuel within ten million years. (Compare this to the current age of the Sun at about five billion years.) The Hipparcos satellite provided measurements that located a dozen OB associations within 650 parsecs of the Sun. The nearest OB association is the Scorpius–Centaurus association, located about 400 light-years from the Sun. OB associations have also been found in the Large Magellanic Cloud and the Andromeda Galaxy. These associations can be quite sparse, spanning 1,500 light-years in diameter. T associations Young stellar groups can contain a number of infant T Tauri stars that are still in the process of entering the main sequence. These sparse populations of up to a thousand T Tauri stars are known as T associations. The nearest example is the Taurus-Auriga T association (Tau–Aur T association), located at a distance of 140 parsecs from the Sun. Other examples of T associations include the R Corona Australis T association, the Lupus T association, the Chamaeleon T association and the Velorum T association. T associations are often found in the vicinity of the molecular cloud from which they formed. Some, but not all, include O–B class stars. Group members have the same age and origin, the same chemical composition, and the same amplitude and direction in their vector of velocity. R associations Associations of stars that illuminate reflection nebulae are called R associations, a name suggested by Sidney van den Bergh after he discovered that the stars in these nebulae had a non-uniform distribution. These young stellar groupings contain main sequence stars that are not sufficiently massive to disperse the interstellar clouds in which they formed. This allows the properties of the surrounding dark cloud to be examined by astronomers. Because R associations are more plentiful than OB associations, they can be used to trace out the structure of the galactic spiral arms. An example of an R association is Monoceros R2, located 830 ± 50 parsecs from the Sun. Moving groups If the remnants of a stellar association drift through the Milky Way as a somewhat coherent assemblage, then they are termed a moving group or kinematic group. Moving groups can be old, such as the HR 1614 moving group at two billion years, or young, such as the AB Dor Moving Group at only 120 million years. Moving groups were studied intensely by Olin Eggen in the 1960s. A list of the nearest young moving groups has been compiled by López-Santiago et al. The closest is the Ursa Major Moving Group which includes all of the stars in the Plough / Big Dipper asterism except for Dubhe and Alkaid. This is sufficiently close that the Sun lies in its outer fringes, without being part of the group. 
Hence, although members are concentrated at declinations near 60°N, some outliers are as far away across the sky as Triangulum Australe at 70°S. The list of young moving groups is constantly evolving. The Banyan Σ tool currently lists 29 nearby young moving groups Recent additions to nearby moving groups are the Volans-Carina Association (VCA), discovered with Gaia, and the Argus Association (ARG), confirmed with Gaia. Moving groups can sometimes be further subdivided in smaller distinct groups. The Great Austral Young Association (GAYA) complex was found to be subdivided into the moving groups Carina, Columba, and Tucana-Horologium. The three Associations are not very distinct from each other, and have similar kinematic properties. Young moving groups have well known ages and can support the characterization of objects with hard-to-estimate ages, such as brown dwarfs. Members of nearby young moving groups are also candidates for directly imaged protoplanetary disks, such as TW Hydrae or directly imaged exoplanets, such as Beta Pictoris b or GU Psc b. Stellar streams A stellar stream is an association of stars orbiting a galaxy that was once a globular cluster or dwarf galaxy that has now been torn apart and stretched out along its orbit by tidal forces. Known kinematic groups Some nearby kinematic groups include: Local Association (Pleiades moving group) AB Doradus moving group Alpha Persei moving cluster Beta Pictoris moving group Castor moving group Corona Australis association Eta Chamaeleontis cluster Hercules-Lyra association Hercules stream Hyades Stream IC 2391 supercluster (Argus Association) Kapteyn group MBM 12 association TW Hydrae association Ursa Major Moving Group Wolf 630 moving group Zeta Herculis moving group Pisces-Eridanus stellar stream Tucana-Horologium association
Physical sciences
Stellar astronomy
null
25739045
https://en.wikipedia.org/wiki/Angstrom
Angstrom
The angstrom is a unit of length equal to 10⁻¹⁰ m; that is, one ten-billionth of a metre, a hundred-millionth of a centimetre, 0.1 nanometre, or 100 picometres. The unit is named after the Swedish physicist Anders Jonas Ångström (1814–1874). It was originally spelled with Swedish letters, as Ångström and later as ångström. The latter spelling is still listed in some dictionaries, but is now rare in English texts. Some popular US dictionaries list only the spelling angstrom. The unit's symbol is Å, which is a letter of the Swedish alphabet, regardless of how the unit is spelled. However, "A" or "A.U." may be used in less formal contexts or typographically limited media. The angstrom is often used in the natural sciences and technology to express sizes of atoms, molecules, and microscopic biological structures, lengths of chemical bonds, the arrangement of atoms in crystals, wavelengths of electromagnetic radiation, and dimensions of integrated circuit parts. The atomic (covalent) radii of phosphorus, sulfur, and chlorine are about 1 angstrom, while that of hydrogen is about 0.5 angstroms. Visible light has wavelengths in the range of 4000–7000 Å. In the late 19th century, spectroscopists adopted one ten-billionth (10⁻¹⁰) of a metre as a convenient unit to express the wavelengths of characteristic spectral lines (monochromatic components of the emission spectrum) of chemical elements. However, they soon realized that the definition of the metre at the time, based on a material artifact, was not accurate enough for their work. So, around 1907 they defined their own unit of length, which they called "Ångström", based on the wavelength of a specific spectral line. It was only in 1960, when the metre was redefined in the same way, that the angstrom became again equal to exactly 10⁻¹⁰ metre. Yet the angstrom was never part of the SI system of units, and has been increasingly replaced by the nanometre (10⁻⁹ m) or picometre (10⁻¹² m). History In 1868, Swedish physicist Anders Jonas Ångström created a chart of the spectrum of sunlight, in which he expressed the wavelengths of electromagnetic radiation in the electromagnetic spectrum in multiples of one ten-millionth of a millimetre (or 10⁻¹⁰ m). Ångström's chart and table of wavelengths in the solar spectrum became widely used in the solar physics community, which adopted the unit and named it after him. It subsequently spread to the fields of astronomical spectroscopy, atomic spectroscopy, and then to other sciences that deal with atomic-scale structures. Early connection to the metre Although intended to correspond to 10⁻¹⁰ metres, that definition was not accurate enough for spectroscopy work. Until 1960 the metre was defined as the distance between two scratches on a bar of platinum-iridium alloy, kept at the BIPM in Paris in a carefully controlled environment. Reliance on that material standard had led to an early error of about one part in 6000 in the tabulated wavelengths. Ångström took the precaution of having the standard bar he used checked against a standard in Paris, but the metrologist Henri Tresca reported it to be so incorrect that Ångström's corrected results were more in error than the uncorrected ones. Cadmium line definition In 1892–1895, Albert A. Michelson and Jean-René Benoît, working at the BIPM with specially developed equipment, determined the length of the international metre standard in terms of the wavelength of the red line of the emission spectrum of electrically excited cadmium vapor. 
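The unit relations quoted at the start of this article are exact powers of ten and can be encoded directly; the short Python sketch below does only that, using a wavelength from the middle of the visible range mentioned above as the example value.

```python
ANGSTROM_IN_METRES = 1e-10   # 1 Å = 1e-10 m = 0.1 nm = 100 pm

def angstrom_to_nanometres(value_angstrom):
    return value_angstrom * 0.1

def angstrom_to_picometres(value_angstrom):
    return value_angstrom * 100.0

def metres_to_angstrom(value_metres):
    return value_metres / ANGSTROM_IN_METRES

# A wavelength in the middle of the visible range (4000-7000 Å).
green_light = 5500.0
print(green_light, "Å =", angstrom_to_nanometres(green_light), "nm")
print(green_light, "Å =", angstrom_to_picometres(green_light), "pm")
print("1 nm =", metres_to_angstrom(1e-9), "Å")
```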
In 1907, the International Union for Cooperation in Solar Research (which later became the International Astronomical Union) defined the international angstrom as precisely 1/6438.4696 of the wavelength of that cadmium line (in dry air at 15 °C (hydrogen scale) and 760 mmHg under a gravity of 9.8067 m/s²). This definition was endorsed at the 7th General Conference on Weights and Measures (CGPM) in 1927, but the material definition of the metre was retained until 1960. From 1927 to 1960, the angstrom remained a secondary unit of length for use in spectroscopy, defined separately from the metre. Redefinition in terms of the metre In 1960, the metre itself was redefined in spectroscopic terms, which allowed the angstrom to be redefined as being exactly 0.1 nanometres. Angstrom star After the redefinition of the metre in spectroscopic terms, the angstrom was formally redefined to be 0.1 nanometres. However, there was briefly thought to be a need for a separate unit of comparable size defined directly in terms of spectroscopy. In 1965, J. A. Bearden defined the angstrom star (symbol: Å*) by taking the wavelength of the tungsten Kα1 X-ray line to be exactly 0.2090100 Å*. This auxiliary unit was intended to be accurate to within 5 parts per million of the version derived from the new metre. Within ten years, the unit had been deemed both insufficiently accurate (with accuracies closer to 15 parts per million) and obsolete due to higher-precision measuring equipment. Current status Although still widely used in physics and chemistry, the angstrom is not officially a part of the International System of Units (SI). Up to 2019, it was listed as a compatible unit by both the International Bureau of Weights and Measures (BIPM) and the US National Institute of Standards and Technology (NIST). However, it is not mentioned in the 9th edition of the official SI standard, the "BIPM Brochure" (2019), or in the NIST version of the same, and the BIPM officially discourages its use. The angstrom is also not included in the European Union's catalogue of units of measure that may be used within its internal market. Symbol For compatibility reasons, Unicode assigns a code point for the angstrom symbol, which is accessible in HTML as the entity &angst;, &#x0212B;, or &#8491;. However, version 5 of the standard already deprecates that code point and normalizes it into the code point for the Swedish letter Å (HTML entity &Aring;, &#xC5;, or &#197;), which should be used instead. In older publications, where the Å glyph was unavailable, the unit was sometimes written as "A.U.". An example is Bragg's 1921 classical paper on the structure of ice, which gives the a- and c-axis lattice constants as 4.52 A.U. and 7.34 A.U., respectively. Ambiguously, the abbreviation "a.u." may also refer to the atomic unit of length, the bohr—about 0.53 Å—or the much larger astronomical unit (about 1.5 × 10¹¹ m).
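The code-point behaviour described under Symbol can be checked directly: under Unicode NFC normalization, the angstrom sign (U+212B) folds into the ordinary letter Å (U+00C5). The following minimal example uses only Python's standard library.

```python
import unicodedata

angstrom_sign = "\u212b"   # ANGSTROM SIGN; the article notes its use is discouraged
letter_a_ring = "\u00c5"   # LATIN CAPITAL LETTER A WITH RING ABOVE

print(unicodedata.name(angstrom_sign))   # ANGSTROM SIGN
print(unicodedata.name(letter_a_ring))   # LATIN CAPITAL LETTER A WITH RING ABOVE

# NFC normalization maps the angstrom sign to the Swedish letter, so
# normalized text carries U+00C5 for the unit symbol.
normalized = unicodedata.normalize("NFC", angstrom_sign)
print(normalized == letter_a_ring)        # True
print(f"wavelength: 6438.4696 {normalized}")
```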
Physical sciences
Metric
Basics and measurement
27604245
https://en.wikipedia.org/wiki/Oral%20administration
Oral administration
Oral administration is a route of administration whereby a substance is taken through the mouth, swallowed, and then processed via the digestive system. This is a common route of administration for many medications. Oral administration can be easier and less painful than other routes of administration, such as injection. However, the onset of action is relatively slow, and the effectiveness is reduced if the substance is not absorbed properly in the digestive system, or if it is broken down by digestive enzymes before it can reach the bloodstream. Some medications may cause gastrointestinal side effects, such as nausea or vomiting, when taken orally. Oral administration can also only be used in patients who are conscious and able to swallow. Terminology Per os (P.O.) is a Latin adverbial phrase meaning literally "through the mouth" or "by mouth". The expression is used in medicine to describe a treatment that is taken orally (as opposed to one applied locally in the mouth, such as caries prophylaxis). The abbreviation P.O. is often used on medical prescriptions. Scope Enteral administration includes: Buccal, dissolved inside the cheek Sublabial, dissolved under the lip Sublingual administration (SL), dissolved under the tongue, but due to rapid absorption many consider SL a parenteral route Oral (PO), swallowed tablet, capsule or liquid Enteral medications come in various forms, including oral solid dosage (OSD) forms: Tablets to swallow, chew or dissolve in water or under the tongue Capsules and chewable capsules (with a coating that dissolves in the stomach or bowel to release the medication there) Time-release or sustained-release tablets and capsules (which release the medication gradually) Powders or granules and oral liquid dosage forms: Teas Drops Liquid medications or syrups Facilitating methods Concomitant ingestion of water facilitates the swallowing of tablets and capsules. If the substance has a disagreeable taste, addition of a flavor may facilitate ingestion. Substances that are harmful to the teeth are preferably given through a straw.
Biology and health sciences
General concepts_2
Health
5133204
https://en.wikipedia.org/wiki/Embraer%20EMB%20314%20Super%20Tucano
Embraer EMB 314 Super Tucano
The Embraer EMB 314 Super Tucano (English: Super Toucan), also named ALX or A-29, is a Brazilian turboprop light attack aircraft designed and built by Embraer as a development of the Embraer EMB 312 Tucano. The A-29 Super Tucano carries a wide variety of weapons, including precision-guided munitions, and was designed to be a low-cost system operated in low-threat environments. In addition to its manufacture in Brazil, Embraer has set up a production line in Portugal through the company OGMA and in the United States in conjunction with Sierra Nevada Corporation for the manufacture of A-29s to export customers. Design and development During the mid-1980s, Embraer was working on the Short Tucano alongside a new version designated the EMB-312G1, carrying the same Garrett engine. The EMB-312G1 prototype flew for the first time in July 1986. However, the project was dropped because the Brazilian Air Force was not interested in it. Nonetheless, the lessons from recent combat use of the aircraft in Peru and Venezuela led Embraer to keep up the studies. Besides a trainer, it researched a helicopter attack version designated "helicopter killer" or EMB-312H. The study was stimulated by the unsuccessful bid for the US military Joint Primary Aircraft Training System program. A proof-of-concept prototype flew for the first time in September 1991. The aircraft features a fuselage extension with the addition of sections before and after of the cockpit to restore its center of gravity and stability, a strengthened airframe, cockpit pressurization, and stretched nose to house the more powerful PT6A-67R () engine. Two new prototypes with the PT6A-68A () engine were built in 1993. The second prototype flew for the first time in May 1993 and the third prototype flew in October 1993. The request for a light attack aircraft was part of the Brazilian government's Amazon Surveillance System project. This aircraft would fly with the R-99A and R-99B aircraft then in service and be used to intercept illegal aircraft flights and patrol Brazil's borders. The ALX project was then created by the Brazilian Air Force, which was also in need of a military trainer to replace the Embraer EMB 326GB Xavante. The new aircraft was to be suited to the Amazon region (high temperature, moisture, and precipitation; low military threat). The ALX was then specified as a turboprop engine plane with a long range and autonomy, able to operate night and day, in any meteorological conditions, and able to land on short airfields lacking infrastructure. In August 1995, the Brazilian Ministry of Aeronautics awarded Embraer a $50 million contract for ALX development. Two EMB-312Hs were updated to serve as ALX prototypes. These made their initial flights in their new configuration in 1996 and 1997, respectively. The initial flight of a production-configured ALX, further modified from one of the prototypes, occurred on 2 June 1999. The second prototype was brought up to two-seater configuration and performed its first flight on 22 October 1999. The changes had been so considerable that the type was given a new designation, the EMB-314 Super Tucano. The total cost of the aircraft development was quoted to be between US$200 million and US$300 million. The aircraft differs from the baseline EMB-312 Tucano trainer aircraft in several respects. 
It is powered by a more powerful Pratt & Whitney Canada PT6A-68C engine (compared to the EMB-312's powerplant); has a strengthened airframe to sustain higher g loads and increase fatigue life to 8,000–12,000 hours in operational environments; a reinforced landing gear to handle greater takeoff weights and heavier stores load, up to ; Kevlar armour protection; two internal, wing-mounted .50 cal. machine guns (with 200 rounds of ammunition each); capacity to carry various ordnance on five weapon hardpoints including Giat NC621 20 mm cannon pods, Mk 81/82 bombs, MAA-1 Piranha air-to-air missiles (AAMs), BLG-252 cluster bombs, and SBAT-70/19 or LAU-68A/G rocket pods on its underwing stations; and has a night-vision goggle-compatible "glass cockpit" with hands-on-throttle-and-stick (HOTAS) controls; provision for a datalink; a video camera and recorder; an embedded mission-planning capability; forward-looking infrared; chaff/flare dispensers; missile approach warning receiver systems and radar warning receivers; and zero-zero ejection seats. The structure is corrosion-protected and the side-hinged canopy has a windshield able to withstand bird strike impacts up to . In 1996, Embraer selected the Israeli firm Elbit Systems to supply the mission avionics for the ALX. For this contract, Elbit was chosen over GEC-Marconi and Sextant Avionique. The Israeli company supplies such equipment as the mission computer, head-up displays, and navigation and stores management systems. On 13 October 2010, the Super Tucano A-29B had passed the mark of 48,000 hours since 21 July 2005 on full-scale wing-fuselage structural fatigue tests, conducted by the Aeronautical Systems Division, part of the Aeronautics and Space Institute at the Structural Testing Laboratory. The tests involve a complex system of hydraulics and tabs that apply pressure to the aircraft structure, simulating air pressure from flying at varying altitudes. The simulation continued for another year to complete the engine-fatigue life test and crack-propagation studies for a damage-tolerance analysis program of conducted by Embraer and the Aeronautics and Space Institute. Embraer developed an advanced training and support system suite called Training Operational Support System (TOSS) an integrated computational tool composed of four systems: computer-based training enabling the student to rehearse the next sortie on a computer simulation; an aviation mission planning station, which uses the three-dimensional (3D) visuals to practice planned missions and to check intervisibility between aircraft and from aircraft and other entities; a mission debriefing station employing real aircraft data to play back missions for review and analysis; and a flight simulator. MPS and MDS was enhanced with MAK's 3D visualization solution to support airforces pre-existing data, including GIS, Web-based servers and a plug-in for custom terrain formats. In 2012, Boeing Defense, Space & Security was selected to integrate the Joint Direct Attack Munition and Small Diameter Bomb to the Super Tucano. In 2013, Embraer Defense and Security disclosed that its subsidiary, OrbiSat, was developing a new radar for the Super Tucano. A Colombian general disclosed that the side-looking airborne radar will be able to locate ground targets smaller than a car with digital precision. In April 2023, the manufacturer announced the A-29N, a variant intended for NATO nations. The A-29N will include NATO-required equipment, data link communications and be fitted for single-pilot operation. 
Available simulators used for training will incorporate virtual reality, augmented reality and mixed reality technology. In November 2024, the Brazilian Air Force announced a contract with Embraer for the modernization of 68 aircraft to the new A-29M stardard, which includes some capabilities of fourth and fifth generation aircraft such as the inclusion of a data link, new digital head-up display, expansion of the range of guided weapons, integration of a helmet-mounted display, the installation of chaff and flares, laser rangefinder and finally a wide-area display similar to those of the new Brazilian Saab JAS-39 Gripen fighters. Operational history Afghanistan In 2011, the Super Tucano was declared the winner of the US Light Air Support contract competition over the Hawker Beechcraft AT-6B Texan II. The contract was cancelled in 2012 citing Hawker Beechcraft's appeal when its proposal was disqualified during the procurement process, but rewon in 2013. Twenty of these light attack aircraft were purchased for the Afghan Air Force (AAF). The first four aircraft arrived in Afghanistan in January 2016, with a further four due before the end of 2016. Combat-ready Afghan A-29 pilots graduated from training at Moody Air Force Base, Georgia, and returned to Afghanistan to represent the first of 30 pilots trained by the 81st Fighter Squadron at Moody AFB. A fleet of 20 A-29s would be in place by 2018, according to a senior U.S. defense official. The Pentagon purchased the Super Tucanos in a $427 million contract with Sierra Nevada Corp. and Embraer, with the aircraft produced at Embraer's facility on the grounds of Jacksonville International Airport in Jacksonville, Florida. The first four aircraft arrived at Hamid Karzai International Airport on 15 January 2016. Prior to the A-29's delivery, the Afghan Air Force lacked close air support aircraft other than attack helicopters. In 2017, the AAF conducted roughly 2,000 airstrike sorties, about 40 a week. The AAF had a record high in October with more than 80 missions in a single week. By March 2018, the AAF had 12 A-29s in service. On 22 March 2018, the AAF deployed a GBU-58 Paveway II 250 lb (113.4 kg) bomb from an A-29 in combat, marking the first time the service had dropped a laser-guided weapon against the Taliban. Fall of Kabul In August 2021, during the 2021 Taliban offensive and the Fall of Kabul, some Afghan pilots fled the country, taking an unknown number of aircraft, including A-29s, with them. An Afghan Air Force A-29 crashed in Uzbekistan's Surxondaryo Region; two pilots ejected and landed with parachutes. Initially it was reported shot down by Uzbekistan air defenses, then the Prosecutor General's office in Uzbekistan issued a statement saying that an Afghan military plane had collided mid-air with an Uzbekistan Air Force MiG-29, finally it retracted the statement about the mid-air collision. At least one Super Tucano was captured by the Taliban in the Mazar-i-Sharif International Airport. Brazil In August 2001, the Brazilian Air Force awarded Embraer a contract for 76 Super Tucano / ALX aircraft with options for a further 23. A total of 99 aircraft were acquired from a contract estimated to be worth U$214.1 million; 66 of these aircraft are two-seater versions, designated A-29B. The remaining 33 aircraft are the single-seat A-29 ALX version. The first aircraft was delivered in December 2003. By September 2007, 50 aircraft had entered service. The 99th, and last, aircraft was delivered in June 2012. 
Sivam programme One of the aircraft's main missions is border patrol under the Sivam programme, particularly to act against drug trafficking activities. On 3 June 2009, two Brazilian Air Force A-29s, guided by an Embraer E-99, intercepted a Cessna U206G inbound from Bolivia in the region of Alta Floresta d'Oeste; after exhausting all procedures, one of the A-29s fired a warning shot from its 12.7 mm machine guns, after which the Cessna followed the A-29s to Cacoal airport. This incident was the first use of powers granted under the Shoot-Down Act, which was enacted in October 2004 to legislate for the downing of illegal flights. A total of 176 kg of pure cocaine base paste, enough to produce almost a ton of cocaine, was discovered on board the Cessna; the two occupants attempted a ground escape but were arrested by federal police in Pimenta Bueno. Operation Ágata On 5 August 2011, Brazil started Operation Ágata, part of a major "Frontiers Strategic Plan" launched in June, with almost 30 continuous days of rigorous military activity in the region of Brazil's border with Colombia; it mobilized 35 aircraft and more than 3,000 military personnel of the Brazilian Army, Brazilian Navy, and Brazilian Air Force surveillance against drug trafficking, illegal mining and logging, and trafficking of wild animals. A-29s of 1 / 3º Aviation Group (GAV), Squadron Scorpion, launched a strike upon an illicit airstrip, deploying eight 230 kg (500 lb) computer-guided Mk 82 bombs to render the airstrip unusable. Multiple RQ-450 UAVs and several E-99s were assigned for night operations to locate remote jungle airstrips used by drug smuggling gangs along the border. The RQ-450s located targets for the A-29s, allowing them to bomb the airstrips with a high level of accuracy using night vision systems and computer systems calculating the impact points of munitions. Operation Ágata 2 On 15 September 2011, Brazil launched the Operation Ágata 2 on the borders with Uruguay, Argentina, and Paraguay. Part of this border is the infamous Triple Frontier. A-29s from Maringá, Dourados, and Campo Grande, and Brazilian upgraded Northrop F-5 Tiger II/F-5EMs from Canoas, intercepted a total of 33 aircraft during Operation Ágata 2 in this area. Brazilian forces seized 62 tons of narcotics, made 3,000 arrests, and destroyed three illicit airstrips, while over 650 tons of weapons and explosives have been seized. Operation Ágata 3 On 22 November 2011, Brazil launched the Operation Ágata 3 on the borders with Bolivia, Peru, and Paraguay. It involved 6,500 personnel, backed by 10 ships and 200 land patrol vehicles, in addition to 70 aircraft, including fighter, transport, and reconnaissance aircraft; it was the largest Brazilian coordinated action involving the Army, Navy, and Air Force against illegal trafficking and organized crime, along a border strip of almost 7,000 km. A-1 (AMX), Northrop F-5 Tiger II/ F-5EM and A-29s from Tabatinga, Campo Grande, Cuiabá, Vilhena, and Porto Velho were employed in defending air space, supported by airborne early warning and control E-99, equipped with a 450-km-range radar capable of detecting low-flying aircraft, and R-99, remote sensing and surveillance. On 7 December 2011, Brazilian Ministry of Defence informed that drug seizures were up by 1,319% over the last six months, compared to prior six months. Chile In August 2008, the Chilean Air Force signed a contract valued at $120 million for 12 A-29Bs. 
The contract includes a broad integrated logistic support package and an advanced training and operation support system (TOSS), covering not only the aircraft, but also an integrated suite for ground support stations. The FACH's TOSS consists of three systems: a mission planning station in which instructor and student program all phases of flight, setting the various parameters of each phase along with navigation, communications, goals, and simulations; a mission debriefing station empowering students with the ability to review all and each flight aspects and phases, enabling to look at the errors and correct them for their next mission; and a flight simulator. The first four A-29Bs arrived in December 2009 while further deliveries took place in the following year. They are based at Los Cóndores Air Base (45 km from Iquique) and are used for tactical instruction at the 1st Air Brigade for the Aviation Group #1, the fully digital cockpit allows students to do a smooth transition between the T-35 Pillán (basic trainer) and the F-16. In 2018, six additional A-29B, along with ground support equipment, arrived; four more units were received two years later. Colombia A total of 25 Super Tucanos (variant AT-29B) were purchased by the Colombian Air Force in a US$234 million deal, purchased directly from Embraer. On 14 December 2006, the first three aircraft arrived to the military airfield of CATAM in Bogotá; two more were delivered later that month, ten more in the first half of 2007, and the rest in June 2008. On 18 January 2007, a squadron of Colombian Air Force Super Tucanos launched the first-ever combat mission of its type, attacking FARC positions in the jungle with Mark 82 bombs. This attack made use of the Super Tucano's constantly computed impact point capability; the aircraft's performance in action was a reported success. On 11 July 2012, the first Super Tucano was lost near Jambalo during an anti-FARC operation; rebels claimed they shot it down with a .50 caliber (12.7 mm) machine gun, but the Colombian Air Force challenged the rebel group's claim after inspecting the wreckage. In 2008, during "Operation Phoenix", a Colombian Air Force Super Tucano used Griffin laser-guided bombs to destroy a guerrilla cell inside Ecuador and kill the second-in-command chief of FARC, Raúl Reyes. This event led to a diplomatic break between the two countries. On 21 September 2010, Operation Sodoma in the Meta department began, 120 miles south of the capital Bogotá. FARC commander Mono Jojoy was killed in a massive military operation on 22 September, after 25 EMB-314s launched seven tonnes of explosives on the camp, while some 600 special forces troops descended by rope from helicopters, opposed by 700 guerrillas; 20 guerrillas died in the attack. On 2 October 2010, during Operation Darién, Super Tucanos used infrared cameras to spot and bombard the FARC 57th front in the Chocó Department, just a kilometer away from the Panama border. Five rebels, including several commanders, were killed. On 15 October 2011, Operation Odiseo started with a total of 969 members of the Colombian armed forces. A total of 18 aircraft participated in Operation Odiseo. On 4 November 2011, five Super Tucanos dropped 1000 lb (450 kg) and 250 lb (135 kg) bombs, plus high-precision smart bombs. This operation ended with the death of the leader of the Revolutionary Armed Forces of Colombia (Fuerzas Armadas Revolucionarias de Colombia, FARC), Alfonso Cano. It was biggest blow in the history of the guerrilla organization. 
At dawn of 22 February 2012, EMB-314s identified the camp of FARC's 57th Front, north of Bojayá near the border with Panama. In Operation Frontera, Super Tucanos dropped two high-precision bombs, destroying the camp and killing six FARC rebels, including Pedro Alfonso Alvarado (alias "Mapanao"), who was responsible for the Bojayá massacre in 2002, in which 119 civilians were killed. Espada de Honor War Plan The Espada de Honor War Plan was an aggressive Colombian counterinsurgency strategy that aimed to dismantle FARC's structure, both militarily and financially. It targeted FARC leadership focusing on eliminating the 15 most powerful economic and military fronts. During Operacion Faraón, at the dawn of 21 March 2012, five Super Tucanos bombarded the FARC's 10th Front guerrilla camp in Arauca, near the Venezuelan border, killing 33 rebels. Five days later, in Operation Armagedón, nine Super Tucanos from Apiay Air Base attacked the FARC's 27th front camp in Vista Hermosa, Meta, using coordinates received from a guerrilla informant recruited by the police intelligence, launching 40 guided 500-lb bombs within three minutes, destroying the camp and killing 36 rebels. In late May, Super Tucanos bombarded a National Liberation Army camp located in rural Santa Rosa at Bolívar Department. On 31 May 2012, a bombardment over the Western Front of the ELN at an inhospitable area of the Chocó Department killed seven rebels. On 6 June 2012, during a minute and half bombardment over FARC's 37th front located in northern Antioquia Department, five Super Tucanos dropped 250-kg bombs, killing eight rebels. In September, Super Tucanos provided reconnaissance and close air support during an "Omega" operation, during which seven terrorists were gunned down and four were captured, including "Fredy Cooper", the 7th front's leader of the Public Order Company. On 5 September 2012, "Danilo Garcia", leader of the FARC's 33rd Front, was killed in a bombing raid; Danilo was considered "the right hand of supreme FARC leader alias Timochenko". Intelligence indicated that the bodies of 15 guerrillas may have been buried in the bombing. Eight A-29s carried out an air strike on 27 September during Operación Saturno at the FARC's 37th front camp in the northwest of Antioquia Department, resulting in the death of Efrain Gonzales Ruiz, "Pateñame", leader of the 35th and 37th fronts, and 13 others. In April 2013, two Super Tucanos bombarded the FARC's 59th front fort in Serranía del Perijá municipality Barrancas, La Guajira. Dominican Republic In August 2001, Embraer announced the signing of a contract with the Dominican Republic for 10 Super Tucanos, to use for pilot training, internal security, border patrol and counter-narcotics trafficking missions. The order was reduced to eight aircraft in January 2009, for a total amount of US$93 million. The first two aircraft were delivered on 18 December 2009, three arrived in June 2010, and the remaining three in October 2010. In February 2011, Dominican Republic Air Force Chief of Operations Col. Hilton Cabral stated: "since the introduction of the Super Tucano aircraft and ground-based radars, illicit air tracks into the Dominican Republic had dropped by over 80 percent." In August 2011, the Dominican Air Force said that since taking delivery of the Super Tucanos in 2009, it has driven away drug flights to the point that they no longer enter the country's airspace. 
In May 2012, the Dominican president Leonel Fernández gave a cooperative order for the armed forces to support a fleet of Super Tucanos for the antidrug fight on Haiti. Ecuador The Ecuadorian Air Force operates 18 Super Tucanos; they are established at Manta Air Base in two squadrons: 2313 "Halcones" (used for border surveillance and flight training) and 2311 "Dragones" (used for counterinsurgency). Ecuadorian Super Tucanos use the PT-6A-68A (1,300 shp) engine. On 23 March 2009, Embraer announced that negotiations over a nine-month-old agreement with the Ecuadorian Air Force had been completed. The deal covers the supply of 24 Super Tucanos to replace Ecuador's aging fleet of Vietnam-era Cessna A-37 Dragonfly strike aircraft, and help reassert control over the country's airspace. In May 2010, after receiving its sixth Super Tucano under a $270 million contract, Ecuador announced a reduction in its order from 24 to 18 Super Tucanos to release funds to buy some used South African Air Force Denel Cheetah C fighters. By cutting its order for the EMB-314, the Defence Ministry says the accrued savings would better allow it to bolster the air force's flagging air defence component. Honduras On 3 September 2011, the head of the Honduran Air Force (Fuerza Aérea Hondureña, or FAH), said that Honduras was to procure four Super Tucanos. On 7 February 2012, the Honduran government informed the Brazilian Trade Ministry of its interest in acquiring a large number of Super Tucanos. However, due to the economic situation, the government was forced to repair their aging aircraft inventory, instead of purchasing eight EMB-314s. On 17 October 2014, the Ministry of Foreign Affairs and International Cooperation announced the go-ahead for acquiring two new A-29s by the FAH following approval from the country's National Council for Security and Defence. As part of the deal, six of the FAH's surviving EMB-312A Tucanos, acquired in 1984, will be refurbished and upgraded by Embraer. Originally operated only by the Academia Militar de Aviación at Palmerola for training, they have recently been armed for counter-narcotics missions. Just three were airworthy as the Brazilian deal was signed for the aircraft to be upgraded and the other three be made airworthy again. Together with the two newly acquired Super Tucanos, this will boost efforts to maintain security within the country. Indonesia In January 2010, Indonesian Air Force commander Air Marshal Imam Sufaat stated that Indonesia had split the competition, designating the Super Tucano as their preferred OV-10 replacement. Indonesia signed a memorandum of understanding with Embraer at the Indo Defense 2010 exhibition in Jakarta. Indonesia initially ordered eight Super Tucanos, including ground-support stations and a logistics package, with an option for another eight on the same terms; the first were scheduled to arrive in 2012. Defense Minister Purnomo Yusgiantoro added that state aircraft maker PT Dirgantara Indonesia would perform maintenance work, and may also manufacture some components. While Indonesia could have made a unified choice to replace its OV-10 light attack and BAE Hawk Mk.53 trainer fleets with a multirole jet, the demands of forward air control and counterinsurgency wars give slower and more stable platforms an advantage. On 10 July 2012, Indonesia ordered a second set of eight Super Tucanos, along with a full flight simulator, bringing their order total to 16. 
In August 2012, Indonesia received the first four planes from the initial batch at a ceremony held in its facility in Gavião Peixoto, São Paulo, Brazil. Deliveries of the second batch of Super Tucanos were delayed by over seven months. In September 2014, the second batch left Brazil on their ferry flight to Malang Abdul Rachman Saleh Air Base in East Java; they will be based at the Malang air base on Indonesia's Java island and operated by Skadron Udara 21 as part of the 2nd Wing. The final four A-29Bs left Brazil on 15 February 2016, passing through Malta-Luqa International Airport on 21 February and ultimately arriving at Indonesia's Malang Abdul Rachman Saleh Air Force Base on 29 February 2016. One aircraft was lost in a crash on 10 February 2016, and a further two in crashes on 16 November 2023. Lebanon The Pentagon first proposed to provide to Lebanon a contract for 10 EMB-314s in 2010. Six Tucanos with 2,000 advanced precision-kill weapon systems went to Lebanon via the US LAS program, but financed by Saudi Arabia at million. The first two were delivered in October 2017, with four more in June 2018. Mauritania Negotiations for the acquisitions of Super Tucanos started in December 2011. On 28 March 2012 at Chile's FIDAE defense and air show, Embraer announced sales of undisclosed numbers of aircraft to Mauritania. On 19 October 2012, Embraer delivered the first EMB-314, fitted with a FLIR Safire III infrared turret for border surveillance operations. Nigeria In November 2013, Nigeria showed interest in acquiring twelve new Super Tucanos. Three aircraft were bought from the Brazilian Air Force inventory in 2017. In April 2017, the United States indicated that it would be moving forward with a deal to sell up to 12 of the aircraft for up to million, ending delays that had been caused by human-rights concerns. In August 2017, the US Department of State approved of the sale of 12 aircraft and associated supplies and weapons. In November 2018, Nigeria purchased 12 Super Tucanos from Sierra Nevada for $329 million, all of which can be fitted with forward-looking infrared systems. They were delivered to Nigeria in October 2021. Philippines The Philippine Air Force (PAF) considered the acquisition of six Super Tucanos to replace the aging OV-10 Bronco. In late 2017, Defense Secretary Delfin Lorenzana signed the contract to purchase six for the Close Air Support Aircraft acquisition project as included in the AFP Modernization Program's Horizon 1 phase. On 13 October 2020, six A-29Bs were turned over to the PAF. They were inducted with the 16th Attack Squadron, 15th Strike Wing. Defense Secretary Delfin Lorenzana was reportedly considering buying six more A-29Bs. By 2024, the PAF intends to operate 24 aircraft across two squadrons. 12 aircraft are to be delivered by 2022, and six by 2024, allowing the PAF to operate close air support, intelligence, surveillance, reconnaissance, and light attack missions. On 9 December 2021, PAF A-29Bs conducted airstrikes on terrorist encampments as part of Oplan Stinkweed in Palimbang, Sultan Kudarat. Portugal In 2021, Portugal showed interest in acquiring at least 10 aircraft. In 2022, the Portuguese Air Force reportedly proposed to purchase 12 second-hand A-29s from Brazilian Air Force reserves. In August 2022 the Chief of Staff of the Air Force stated the service's interest in acquiring propeller aircraft for combat missions. By July 2024, it was reported that negotiations were underway for new-build A-29Ns. 
In December 2024, it was announced that the Força Aérea Portuguesa would acquire twelve A-29N Super Tucanos. United States Civilian One Super Tucano was purchased by a subsidiary of Blackwater Worldwide, an American private military contractor. It lacked the normal wing-mounted machine guns. In 2012, that aircraft was sold on to Tactical Air Support, Inc., of Reno, Nevada. Military Special operations In 2008, the U.S. Navy began testing the Super Tucano at the behest of the U.S. Special Operations Command for its potential use to support special warfare operations, giving it the official U.S. designation A-29B. Islamic Republic of Afghanistan In 2009, the Super Tucano was offered in a U.S. Air Force competition for 100 counterinsurgency aircraft. On 12 April 2010, Brazil signed an agreement to open negotiations for the acquisition of 200 Super Tucanos by the U.S. On 16 November 2011, the AT-6 was excluded from the LAS program, effectively selecting the Super Tucano. According to GAO: "the Air Force concluded that HBDC had not adequately corrected deficiencies in its proposal... that multiple deficiencies and significant weaknesses found in HBDC's proposal make it technically unacceptable and results in unacceptable mission capability risk". Hawker Beechcraft's protest against its exclusion was dismissed. While the contract award was disputed, a stop-work was issued in January 2012. For this procurement, the avionics were supplied by Elbit Systems of America. Sierra Nevada, the US-based prime contractor built the Super Tucano in Jacksonville, Florida. The 81st Fighter Squadron, based at Moody Air Force Base, was reactivated on 15 January 2015 and received the A-29s and provided training to pilots and maintainers from the Afghan Air Force. They were turned over to the Afghans in batches from December 2018. Light attack experiment In August 2017, the US Air Force conducted the "Light Attack Experiment" to evaluate potential light attack aircraft. Following this, it decided to continue experimenting with two non-developmental aircraft, the Textron Aviation AT-6B Wolverine derivative of the T-6 Texan II and the Sierra Nevada/Embraer A-29 Super Tucano. Tests conducted at Davis-Monthan Air Force Base, Arizona between May and July 2018, examined logistics requirements, weapons and sensor issues, and future interoperability with partner forces. The Air Force expects to have the information it needs to potentially buy light attack aircraft in a future competition, without conducting a combat demonstration, based on data collected during the first round of the experiment and future data anticipated to be collected in the next phase of experimentation. The A-29 had a fatal crash while over the Red Rio Bombing Range, White Sands Missile Range. Paraguay In July 2024, Embraer and the Paraguayan Air Force announced the acquisition of six Super Tucanos, with deliveries planned to begin in 2025. Uruguay In July 2024, Embraer and the Uruguayan Air Force announced the acquisition of six Super Tucanos, with deliveries planned to begin in 2025. Potential operators Bolivia Embraer reportedly offered the Super Tucano to the Bolivian Air Force. Equatorial Guinea Equatorial Guinea was said to be interested in purchasing the Super Tucano. Guatemala In August 2011, the Guatemalan Air Force requested credit approval of $166 million to buy six EMB-314s, control centers, radar, and equipment, in the context of a programme named "C4I". 
In October 2012, the Guatemalan Congress approved a loan for the C4I programme, including the purchase of six A-29s, to be granted by Brazilian and Spanish banks (BNDES and BBVA). The deal was finalized in April 2013. The first two aircraft were expected to arrive in April 2014, followed by two units in 2015 and two more in 2016. However, the president of Guatemala cancelled the order in November 2013. In January 2015, the Guatemalan defence minister disclosed that his country was looking at purchasing two aircraft from Embraer. Libya The Libyan government is interested in buying up to 24 Super Tucanos. Mozambique Brazil planned to donate three EMB-312s for Mozambique Air Force, which may also acquire three Super Tucanos. In 2016, the donation deal was canceled by the Brazilian government. Peru In March 2011, a Brazilian federal representative spoke on the Unasur treaty, stating that it could promote the surveillance integration in the Amazon Basin and facilitate the sale of 12 Super Tucanos and upgrade kits for 20 Peruvian EMB-312s. In November 2011, Peru's defence minister announced the Super Tucano purchase was suspended in favor of the Korean KT-1. On 14 February 2012, Brazil's Ministry of Defence said Peru is considering buying ten Super Tucanos. However, in November 2012, a government-to-government contract was signed for 20 KT-1s. In 2012, the governments of Peru and Brazil restarted negotiations for the acquisition of 12 A-29s to replace A-37 Dragonflys that are due to withdraw in 2017. Suriname Suriname is interested in purchasing between two and four Super Tucanos for light attack roles. Thailand Embraer has also quoted Thailand as a potential customer for the type. UAE In September 2010, it was announced that Brazil and the United Arab Emirates were working a deal that includes sales of Super Tucanos. It was reported in early 2015 that the UAE is negotiating with Embraer the purchase of 24 Super Tucanos, the deal would include six aircraft from Brazilian Air Force inventory for immediate delivery. Since then an Emirati company, Callidus, bought a Brazilian company, Novaer, founded by an engineer involved in the Tucano project, and started a project for an alternative aircraft strongly resembling it, the Calidus B-250. Ukraine In August 2019, a Ukrainian military delegation visited Embraer's military division in São Paulo and flew the Super Tucano. In October 2019, the President of Ukraine, Volodymyr Zelensky, in a meeting with Brazilian President Jair Bolsonaro, informed that his country would buy the Super Tucano. In December 2022, the Brazilian media reported a Ukrainian interest in the Super Tucano, to equip its air force for the Russo-Ukrainian War; however, the sale was blocked by the Bolsonaro administration. A diplomatic effort by the United States to persuade the president-elect of Brazil, Luiz Inácio Lula da Silva, to unblock the deal, has been reported. Missed contracts Bolivia After the U.S. ban on Czech aircraft Aero L-159 Alca export on 7 August 2009, the Bolivian Defense Minister said they were considering six aircraft from Brazil or China with comparable role as the L-159. On 9 October 2009, it was announced that China would manufacture six K-8 for Bolivia, to be used for antidrug operations, at a price of $9.7 million per aircraft. El Salvador In November 2010, the President of the Legislative Defense Committee of El Salvador stated they would purchase an estimated 10 EMB-314s. It was postponed in February 2011 by lack of funds. 
In 2013, the El Salvador Air Force acquired 10 Cessna A-37 retired from Chilean Air Force. Iraq In January 2015 a report in Jane's Defence Weekly said the Iraqi Air Force would receive 24 Super Tucanos, six directly from Brazilian Air Force stocks, and some from an order placed by the United Arab Emirates. Senegal In September 2012, Senegal was reportedly in a procurement process with Embraer. In April 2013, the Brazilian minister of Defence disclosed that Senegal was the 4th African nation to order the Super Tucano, in the following day, Embraer confirmed the order, which included a training system for pilots and mechanics (TOSS) in Senegal, bringing autonomy to that country's Air Force in preparing qualified personnel. However, the deal was not finalized and Senegal opted for four Korean KT-1s. Sweden Sweden proposed replacing its Saab 105 trainer aircraft with Super Tucanos, if Brazil chose to buy the Gripen NG. In May 2021, the Swedish Armed Forces announced that it chose Grob G 120TP as the new trainer and it will enter service in 2023. United Kingdom Elbit Systems and Embraer offered the EMB-314 for the United Kingdom's basic trainer contest. However, the Beechcraft T-6C Texan II formed part of the preferred bid for the requirement in October 2014. Venezuela In February 2006, a 36-unit sale for Venezuela fell through because it was thought the U.S. would block the transfer of U.S.-built components. Venezuelan President Hugo Chávez claimed the U.S. had pressured Brazil not to sign the contract. Operators Afghan Air Force – 26 A-29s ordered, deliveries took place from 2016 to late 2020. They were built by Sierra Nevada Corporation and Embraer in Jacksonville, Florida, and supplied to Afghanistan via the U.S. Air Force's Light Air Support (LAS) program. The first was delivered to the U.S. service in September 2014. The first four A-29s arrived at Hamid Karzai International Airport in Kabul on 15 January 2016. After the fall of Kabul to the Taliban, it is unclear if A-29s will continue to be operated by Afghans. National Air Force of Angola – six aircraft ordered. Deliveries were scheduled to begin in early 2012; but the first three were delivered on 31 January 2013. 8th Training Squadron, 24th Training Regiment at Menongue Airport Brazilian Air Force – 99 aircraft (33 A-29A & 66 A-29B). At least four aircraft have been lost. 1st Squadron of the 3rd Aviation Group (1º/3º GAv) "Esquadrão Escorpião" (Scorpion Squadron) 2nd Squadron of the 3rd Aviation Group (2º/3º GAv) "Esquadrão Grifo" (Griffon Squadron) 3rd Squadron of the 3rd Aviation Group (3º/3º GAv) "Esquadrão Flecha" (Arrow Squadron) 2nd Squadron of the 5th Aviation Group (2º/5º GAv) "Esquadrão Joker" (Joker Squadron) The Aerial Demonstration Squadron "Esquadrilha da Fumaça" Smoke Squadron (EDA) Burkina Faso Air Force – 3 aircraft delivered in September 2011 of version A-29B. Combat Squadron (Escadrille de Chasse) located at Ouagadougou Air Base Chilean Air Force 22 aircraft (12 received in 2009, 6 in 2018 and 4 in 2020). Grupo de Aviacion N°1 located at Base aérea "Los Cóndores" in Iquique Colombian Aerospace Force – 25 aircraft, introduced between 2006 and 2008. At least one aircraft crashed, claimed shot down by FARC. 211 Combat Squadron "Grifos" of the Twenty-first Combat Group at the Captain Luis F. 
Gómez Niño Air Base 312 Combat Squadron "Drakos" of the Thirty-first Combat Group at the Major General Alberto Pauwels Rodríguez Air Base at Malambo, near Barranquilla 611 Combat Squadron of the Sixty-first Combat Group at the Captain Ernesto Esguerra Cubides Air Base Dominican Air Force – 8 aircraft Escuadrón de Combate "Dragones" at the San Isidro Air Base Ecuadorian Air Force – 18 aircraft, all delivered by 2011. Ala de Combate No.23, "Luchando Vencerás", Base Aérea Eloy Alfaro, Manta Escuadrón de Combate 2313 "Halcones" Escuadrón de Combate 2311 "Dragones" Ghana Air Force – 5 aircraft ordered in 2015. The total value of the contract was $88 million with a loan from BNDES, which also includes logistics support and training for pilots and mechanics in Ghana. The first aircraft were expected to arrive in late 2016 and will be used for advanced training, border surveillance and internal security missions. Ghana's Air Force plans to acquire four more A-29s with light attack, reconnaissance and training capabilities; if finalized, the deal will increase Ghana's A-29 fleet to nine. As of 2024, no deliveries had been made; Embraer and the Sierra Nevada Corporation demonstrated their A-29 Super Tucano close air support, reconnaissance and trainer aircraft to the Ghana Air Force on 19 February 2024 at Accra Air Force Base, using their demonstrator aircraft PT-ZTU.<ref>[https://www.defenceweb.co.za/aerospace/aerospace-aerospace/embraer-and-snc-demonstrate-super-tucano-to-ghana-air-force/ "Embraer and SNC demonstrate Super Tucano to Ghana Air Force"]. Retrieved 8 August 2024.</ref> Honduran Air Force – 2 aircraft ordered in 2014. Indonesian Air Force – 16 aircraft ordered and delivered; one was lost in a crash in February 2016, and a further two in crashes in November 2023. The first four aircraft of the first batch of eight had been delivered as of August 2012; the delivery of the second batch of four aircraft was delayed until September 2014. A total of 16 were ordered in 2011, with deliveries taking place in 2012, 2014, 2015 and 2016. In March 2012, the Indonesian Ministry of Defense indicated the possibility of future joint production, further modernization and sales in the Asia-Pacific region. Air Squadron 21 at the Lanud Abdul Rachman Saleh air base Lebanese Air Force – 6 A-29s ordered, all six delivered by May 2018. Operating with the 7th Squadron. Mali Air Force – 4 A-29s delivered in July 2018. Six were originally ordered, but due to financial issues the order was reduced to four aircraft. Mauritanian Air Force – 4 aircraft ordered; two had been received as of December 2012, with two more on order. Nigerian Air Force – 12 aircraft on order. The first batch of six aircraft was delivered on 22 July 2021, and deliveries were completed with the arrival of the final batch in Nigeria in October 2021. Paraguayan Air Force – 6 aircraft ordered in July 2024, with deliveries planned to begin in 2025. Philippine Air Force – 6 aircraft delivered on 13 October 2020. Another 6 are planned to be ordered. 16th Attack Squadron "Eagles" Portuguese Air Force – On 12 December 2024, Portugal approved the acquisition of 12 A-29N Super Tucano aircraft in a 200-million-euro deal. These variants are configured to NATO standards, with avionics supporting single-pilot operation and modern data-link functions. The version is designed for close air support (CAS), ISR, and advanced training.
The acquisition addresses gaps left by the retired Portuguese Alpha Jets. Turkmen Air Force – Total order quantity not disclosed. 5 aircraft delivered in 2020–21. EP Aviation – part of Academi (formerly Blackwater) – at least one twin-seater variant for pilot training (delivered in February 2008), with possible further orders for a counter-insurgency role. "Report: Blackwater Worldwide Purchases Brazilian-Made Fighter Plane." Fox News, 2 June 2008. Later sold in 2010 to Tactical Air Support in Reno, NV. United States Navy leased an aircraft for testing, as part of the Imminent Fury program. United States Air Force – 3 to 6 aircraft operated by Air Force Special Operations Command. Delivered in 2021. Transferred to the U.S. Air Force Test Pilot School in 2024. Uruguayan Air Force – 6 aircraft ordered in July 2024, with deliveries planned to begin in 2025. Accidents On 10 February 2016, an Indonesian Air Force Embraer EMB-314 Super Tucano crashed in Malang, East Java, in a suburban area near Abdul Rachman Saleh Air Base. The aircraft (TT-3108) was on a routine test flight. Both pilots and two civilians died in the accident. On 15 August 2021, an Embraer 314 aircraft belonging to the Afghan Armed Forces crashed in the Sherabad district of the Surkhandarya region of the Republic of Uzbekistan. On 16 November 2023, two Indonesian Air Force Embraer EMB-314 Super Tucanos crashed on the slopes of Mount Bromo, near Keduwung Village, Puspo District, Pasuruan, East Java. The aircraft (TT-3103 and TT-3111) were part of a four-aircraft formation with two other Super Tucanos, on a training flight in cloudy weather conditions. The four aircraft were flying in a box formation when they suddenly encountered heavy clouds obstructing visibility; TT-3103 and TT-3111 reportedly collided with the mountain slope as the four aircraft broke formation and attempted to get out of the clouds. The other two Super Tucanos landed safely at Abdul Rachman Saleh Air Base. All four pilots aboard the two aircraft died in the accident. Aircraft on display EMB 314B Super Tucano FAB-5900 – Brazilian Air Force – Memorial Aeroespacial Brasileiro, São José dos Campos Specifications (EMB 314 Super Tucano)
Technology
Specific aircraft
null
5139283
https://en.wikipedia.org/wiki/Geometric%20albedo
Geometric albedo
In astronomy, the geometric albedo of a celestial body is the ratio of its actual brightness as seen from the light source (i.e. at zero phase angle) to that of an idealized flat, fully reflecting, diffusively scattering (Lambertian) disk with the same cross-section. (This phase angle refers to the direction of the light paths and is not a phase angle in its normal meaning in optics or electronics.) Diffuse scattering implies that radiation is reflected isotropically with no memory of the location of the incident light source. Zero phase angle corresponds to looking along the direction of illumination. For Earth-bound observers, this occurs when the body in question is at opposition and on the ecliptic. The visual geometric albedo refers to the geometric albedo quantity when accounting for only electromagnetic radiation in the visible spectrum. Airless bodies The surface materials (regoliths) of airless bodies (in fact, the majority of bodies in the Solar System) are strongly non-Lambertian and exhibit the opposition effect, which is a strong tendency to reflect light straight back to its source, rather than scattering light diffusely. The geometric albedo of these bodies can be difficult to determine because of this, as their reflectance is strongly peaked for a small range of phase angles near zero. The strength of this peak differs markedly between bodies, and can only be found by making measurements at small enough phase angles. Such measurements are usually difficult due to the necessary precise placement of the observer very close to the incident light. For example, the Moon is never seen from the Earth at exactly zero phase angle, because then it is being eclipsed. Other Solar System bodies are not in general seen at exactly zero phase angle even at opposition, unless they are also simultaneously located at the ascending or descending node of their orbit, and hence lie on the ecliptic. In practice, measurements at small nonzero phase angles are used to derive the parameters which characterize the directional reflectance properties for the body (Hapke parameters). The reflectance function described by these can then be extrapolated to zero phase angle to obtain an estimate of the geometric albedo. For very bright, solid, airless objects such as Saturn's moons Enceladus and Tethys, whose total reflectance (Bond albedo) is close to one, a strong opposition effect combines with the high Bond albedo to give them a geometric albedo above unity (1.4 in the case of Enceladus). Light is preferentially reflected straight back to its source even at low angle of incidence such as on the limb or from a slope, whereas a Lambertian surface would scatter the radiation much more broadly. A geometric albedo above unity means that the intensity of light scattered back per unit solid angle towards the source is higher than is possible for any Lambertian surface. Stars Stars shine intrinsically, but they can also reflect light. In a close binary star system polarimetry can be used to measure the light reflected from one star off another (and vice versa) and therefore also the geometric albedos of the two stars. This task has been accomplished for the two components of the Spica system, with the geometric albedo of Spica A and B being measured as 0.0361 and 0.0136 respectively. The geometric albedos of stars are in general small, for the Sun a value of 0.001 is expected, but for hotter or lower-gravity (i.e. 
giant) stars the amount of reflected light is expected to be several times that of the stars in the Spica system. Equivalent definitions For the hypothetical case of a plane surface, the geometric albedo is the albedo of the surface when the illumination is provided by a beam of radiation that comes in perpendicular to the surface. Examples The geometric albedo may be greater or smaller than the Bond albedo, depending on the surface and atmospheric properties of the body in question.
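To make the definition concrete, the brightness comparison can be written as a short calculation. The sketch below is only an illustration of the definition given above; the body, its size, the illumination and the measured flux are all hypothetical values, not data for any real object. For an idealized, fully reflecting Lambertian disk of radius r illuminated head-on by irradiance E and viewed from distance d at zero phase angle, the received flux is E·r²/d², and the geometric albedo is the measured zero-phase flux divided by that reference value.

```python
import math

def lambertian_disk_flux(irradiance, radius_m, distance_m):
    """Flux (W/m^2) received from an idealized, fully reflecting Lambertian
    disk illuminated normally by `irradiance` (W/m^2) and viewed face-on
    (zero phase angle) from `distance_m`: E * r^2 / d^2."""
    radiance = irradiance / math.pi                      # L = E / pi for a perfect Lambertian reflector
    solid_angle = math.pi * radius_m**2 / distance_m**2  # disk as seen by the observer
    return radiance * solid_angle

def geometric_albedo(observed_flux, irradiance, radius_m, distance_m):
    """Ratio of the observed zero-phase flux to the flux an equivalent
    Lambertian disk of the same cross-section would produce."""
    return observed_flux / lambertian_disk_flux(irradiance, radius_m, distance_m)

# Hypothetical example values, not measurements of a real body:
# a 500 km radius object, 1 au from the observer, receiving 50 W/m^2
# of sunlight and returning 2.0e-10 W/m^2 at zero phase angle.
AU = 1.496e11  # metres
p = geometric_albedo(observed_flux=2.0e-10, irradiance=50.0,
                     radius_m=5.0e5, distance_m=AU)
print(f"geometric albedo = {p:.2f}")   # about 0.36 with these made-up numbers
```

A value above 1, as reported for Enceladus, simply means the body returns more light toward the source than the Lambertian reference disk would.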
Physical sciences
Planetary science
Astronomy
906783
https://en.wikipedia.org/wiki/Constructible%20function
Constructible function
In complexity theory, a time-constructible function is a function f from natural numbers to natural numbers with the property that f(n) can be constructed from n by a Turing machine in time of the order of f(n). The purpose of such a definition is to exclude functions that do not provide an upper bound on the runtime of some Turing machine. Time-constructible definitions There are two different definitions of a time-constructible function. In the first definition, a function f is called time-constructible if there exists a positive integer n0 and a Turing machine M which, given a string 1^n consisting of n ones, stops after exactly f(n) steps for all n ≥ n0. In the second definition, a function f is called time-constructible if there exists a Turing machine M which, given a string 1^n, outputs the binary representation of f(n) in O(f(n)) time (a unary representation may be used instead, since the two can be interconverted in O(f(n)) time). There is also a notion of a fully time-constructible function. A function f is called fully time-constructible if there exists a Turing machine M which, given a string 1^n consisting of n ones, stops after exactly f(n) steps. This definition is slightly less general than the first two but, for most applications, either definition can be used. Space-constructible definitions Similarly, a function f is space-constructible if there exists a positive integer n0 and a Turing machine M which, given a string 1^n consisting of n ones, halts after using exactly f(n) cells for all n ≥ n0. Equivalently, a function f is space-constructible if there exists a Turing machine M which, given a string 1^n consisting of n ones, outputs the binary (or unary) representation of f(n), while using only O(f(n)) space. Also, a function f is fully space-constructible if there exists a Turing machine M which, given a string 1^n consisting of n ones, halts after using exactly f(n) cells. Examples All the commonly used functions f(n) (such as n, n^k, 2^n) are time- and space-constructible, as long as f(n) is at least cn for a constant c > 0. No function which is o(n) can be time-constructible unless it is eventually constant, since there is insufficient time to read the entire input. However, log n is a space-constructible function. Applications Time-constructible functions are used in results from complexity theory such as the time hierarchy theorem. They are important because the time hierarchy theorem relies on Turing machines that must determine in O(f(n)) time whether an algorithm has taken more than f(n) steps. This is, of course, impossible without being able to calculate f(n) in that time. Such results are typically true for all natural functions f but not necessarily true for artificially constructed f. To formulate them precisely, it is necessary to have a precise definition for a natural function f for which the theorem is true. Time-constructible functions are often used to provide such a definition. Space-constructible functions are used similarly, for example in the space hierarchy theorem.
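As a rough illustration of the second definition, the following sketch computes f(n) = n² from a unary input while counting elementary operations, and checks that the count stays within a constant factor of f(n). This is only a model: a Python loop is not a Turing machine, and the step accounting is deliberately crude, but it shows the bookkeeping idea that the binary representation of f(n) must be producible in O(f(n)) time.

```python
def compute_f_counting_steps(unary_input: str):
    """Compute f(n) = n^2 from the unary string '1'*n, returning the binary
    representation of f(n) and a crude count of elementary steps taken."""
    steps = 0
    n = 0
    for ch in unary_input:      # reading the input costs n steps
        assert ch == "1"
        n += 1
        steps += 1
    result = 0
    for _ in range(n):          # n^2 unit increments, one step each
        for _ in range(n):
            result += 1
            steps += 1
    return bin(result)[2:], steps

for n in (1, 4, 10, 25):
    binary_value, steps = compute_f_counting_steps("1" * n)
    f_n = n * n
    # steps = n + n^2 <= 2 * f(n) for n >= 1, i.e. O(f(n)) as the definition requires
    print(n, binary_value, steps, steps <= 2 * f_n)
```

By contrast, a function growing more slowly than n (other than an eventually constant one) could not be handled this way, since merely reading the whole input already takes n steps.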
Mathematics
Complexity theory
null
906878
https://en.wikipedia.org/wiki/Metamaterial
Metamaterial
A metamaterial (from the Greek word μετά meta, meaning "beyond" or "after", and the Latin word materia, meaning "matter" or "material") is a type of material engineered to have a property, typically rarely observed in naturally occurring materials, that is derived not from the properties of the base materials but from their newly designed structures. Metamaterials are usually fashioned from multiple materials, such as metals and plastics, and are usually arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Their precise shape, geometry, size, orientation, and arrangement give them their "smart" properties of manipulating electromagnetic, acoustic, or even seismic waves: by blocking, absorbing, enhancing, or bending waves, to achieve benefits that go beyond what is possible with conventional materials. Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. Those that exhibit a negative index of refraction for particular wavelengths have been the focus of a large amount of research. These materials are known as negative-index metamaterials. Potential applications of metamaterials are diverse and include sports equipment optical filters, medical devices, remote aerospace applications, sensor detection and infrastructure monitoring, smart solar power management, Lasers, crowd control, radomes, high-frequency battlefield communication and lenses for high-gain antennas, improving ultrasonic sensors, and even shielding structures from earthquakes. Metamaterials offer the potential to create super-lenses. Such a lens can allow imaging below the diffraction limit that is the minimum resolution d=λ/(2NA) that can be achieved by conventional lenses having a numerical aperture NA and with illumination wavelength λ. Sub-wavelength optical metamaterials, when integrated with optical recording media, can be used to achieve optical data density higher than limited by diffraction. A form of 'invisibility' was demonstrated using gradient-index materials. Acoustic and seismic metamaterials are also research areas. Metamaterial research is interdisciplinary and involves such fields as electrical engineering, electromagnetics, classical optics, solid state physics, microwave and antenna engineering, optoelectronics, material sciences, nanoscience and semiconductor engineering. History Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose, who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century. In the late 1940s, Winston E. Kock from AT&T Bell Laboratories developed materials that had similar characteristics to metamaterials. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas. Microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media. Negative-index materials were first described theoretically by Victor Veselago in 1967. He proved that such materials could transmit light. He showed that the phase velocity could be made anti-parallel to the direction of Poynting vector. This is contrary to wave propagation in naturally occurring materials. In 1995, John M. 
Guerra fabricated a sub-wavelength transparent grating (later called a photonic metamaterial) having 50 nm lines and spaces, and then coupled it with a standard oil immersion microscope objective (the combination later called a super-lens) to resolve a grating in a silicon wafer also having 50 nm lines and spaces. This super-resolved image was achieved with illumination having a wavelength of 650 nm in air. In 2000, John Pendry was the first to identify a practical way to make a left-handed metamaterial, a material in which the right-hand rule is not followed. Such a material allows an electromagnetic wave to convey energy (have a group velocity) against its phase velocity. Pendry's idea was that metallic wires aligned along the direction of a wave could provide negative permittivity (dielectric function ε < 0). Natural materials (such as ferroelectrics) display negative permittivity; the challenge was achieving negative permeability (μ < 0). In 1999 Pendry demonstrated that a split ring (C shape) with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll. In 2000, David R. Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically, split-ring resonators and thin wire structures. A method was provided in 2002 to realize negative-index metamaterials using artificial lumped-element loaded transmission lines in microstrip technology. In 2003, complex (both real and imaginary parts of) negative refractive index and imaging by flat lens using left handed metamaterials were demonstrated. By 2007, experiments that involved negative refractive index had been conducted by many groups. At microwave frequencies, the first, imperfect invisibility cloak was realized in 2006. From the standpoint of governing equations, contemporary researchers can classify the realm of metamaterials into three primary branches: Electromagnetic/Optical wave metamaterials, other wave metamaterials, and diffusion metamaterials. These branches are characterized by their respective governing equations, which include Maxwell's equations (a wave equation describing transverse waves), other wave equations (for longitudinal and transverse waves), and diffusion equations (pertaining to diffusion processes). Crafted to govern a range of diffusion activities, diffusion metamaterials prioritize diffusion length as their central metric. This crucial parameter experiences temporal fluctuations while remaining immune to frequency variations. In contrast, wave metamaterials, designed to adjust various wave propagation paths, consider the wavelength of incoming waves as their essential metric. This wavelength remains constant over time, though it adjusts with frequency alterations. Fundamentally, the key metrics for diffusion and wave metamaterials present a stark divergence, underscoring a distinct complementary relationship between them. For comprehensive information, refer to Section I.B, "Evolution of metamaterial physics," in Ref. Electromagnetic metamaterials An electromagnetic metamaterial affects electromagnetic waves that impinge on or interact with its structural features, which are smaller than the wavelength. 
To behave as a homogeneous material accurately described by an effective refractive index, its features must be much smaller than the wavelength. The unusual properties of metamaterials arise from the resonant response of each constituent element rather than from their spatial arrangement into a lattice. This allows the material to be described by local effective material parameters (permittivity and permeability). The resonance effect related to the mutual arrangement of elements is responsible for Bragg scattering, which underlies the physics of photonic crystals, another class of electromagnetic materials. Unlike the local resonances, Bragg scattering and the corresponding Bragg stop-band have a low-frequency limit determined by the lattice spacing. The subwavelength approximation ensures that the Bragg stop-bands, with their strong spatial dispersion effects, are at higher frequencies and can be neglected. The criterion for shifting the local resonance below the lower Bragg stop-band makes it possible to build a photonic phase transition diagram in a parameter space, for example, size and permittivity of the constituent element. Such a diagram displays the domain of structure parameters for which metamaterial properties can be observed in the electromagnetic material. For microwave radiation, the features are on the order of millimeters. Microwave frequency metamaterials are usually constructed as arrays of electrically conductive elements (such as loops of wire) that have suitable inductive and capacitive characteristics. Many microwave metamaterials use split-ring resonators. Photonic metamaterials are structured on the nanometer scale and manipulate light at optical frequencies. Photonic crystals and frequency-selective surfaces such as diffraction gratings, dielectric mirrors and optical coatings exhibit similarities to subwavelength structured metamaterials. However, these are usually considered distinct from metamaterials, as their function arises from diffraction or interference and thus cannot be approximated as a homogeneous material. However, material structures such as photonic crystals are effective in the visible light spectrum. The middle of the visible spectrum has a wavelength of approximately 560 nm (for sunlight). Photonic crystal structures are generally half this size or smaller, that is < 280 nm. Plasmonic metamaterials utilize surface plasmons, which are packets of electrical charge that collectively oscillate at the surfaces of metals at optical frequencies. Frequency selective surfaces (FSS) can exhibit subwavelength characteristics and are known variously as artificial magnetic conductors (AMC) or high impedance surfaces (HIS). FSS display inductive and capacitive characteristics that are directly related to their subwavelength structure. Electromagnetic metamaterials can be divided into different classes, as follows: Negative refractive index Negative-index metamaterials (NIM) are characterized by a negative index of refraction. Other terms for NIMs include "left-handed media", "media with a negative refractive index", and "backward-wave media". NIMs where the negative index of refraction arises from simultaneously negative permittivity and negative permeability are also known as double negative metamaterials or double negative materials (DNG). Assuming a material well-approximated by a real permittivity and permeability, the relationship between permittivity εr, permeability μr and refractive index n is given by n = ±√(εrμr). All known non-metamaterial transparent materials (glass, water, ...)
possess positive εr and μr. By convention the positive square root is used for n. However, some engineered metamaterials have εr < 0 and μr < 0. Because the product εrμr is positive, n is real. Under such circumstances, it is necessary to take the negative square root for n. When both εr and μr are positive (negative), waves travel in the forward (backward) direction. Electromagnetic waves cannot propagate in materials with εr and μr of opposite sign, as the refractive index becomes imaginary. Such materials are opaque to electromagnetic radiation; examples include plasmonic materials such as metals (gold, silver, ...). The foregoing considerations are simplistic for actual materials, which must have complex-valued εr and μr. The real parts of both εr and μr do not have to be negative for a passive material to display negative refraction. Indeed, a negative refractive index for circularly polarized waves can also arise from chirality. Metamaterials with negative n have numerous interesting properties: Snell's law (n1sinθ1 = n2sinθ2) still describes refraction, but as n2 is negative, incident and refracted rays are on the same side of the surface normal at an interface of positive and negative index materials. Cherenkov radiation points the other way. The time-averaged Poynting vector is antiparallel to the phase velocity. However, for waves (energy) to propagate, a –μ must be paired with a –ε in order to satisfy the dependence of the wave number on the material parameters, k = ω√(με). A negative index of refraction derives mathematically from the vector triplet E, H and k. For plane waves propagating in electromagnetic metamaterials, the electric field, magnetic field and wave vector follow a left-hand rule, the reverse of the behavior of conventional optical materials. To date, only metamaterials exhibit a negative index of refraction. Single negative Single negative (SNG) metamaterials have either negative relative permittivity (εr) or negative relative permeability (μr), but not both. They act as metamaterials when combined with a different, complementary SNG, jointly acting as a DNG. Epsilon negative media (ENG) display a negative εr while μr is positive. Many plasmas exhibit this characteristic. For example, noble metals such as gold or silver are ENG in the infrared and visible spectrums. Mu-negative media (MNG) display a positive εr and negative μr. Gyrotropic or gyromagnetic materials exhibit this characteristic. A gyrotropic material is one that has been altered by the presence of a quasistatic magnetic field, enabling a magneto-optic effect. A magneto-optic effect is a phenomenon in which an electromagnetic wave propagates through such a medium. In such a material, left- and right-rotating elliptical polarizations can propagate at different speeds. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect: the polarization plane can be rotated, forming a Faraday rotator. When light is reflected from such a layer, the result is known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect). Two gyrotropic materials with reversed rotation directions of the two principal polarizations are called optical isomers. Joining a slab of ENG material and a slab of MNG material results in properties such as resonances, anomalous tunneling, transparency and zero reflection. Like negative-index materials, SNGs are innately dispersive, so their εr, μr and refractive index n are functions of frequency.
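The sign behaviour of Snell's law described above for negative-index media can be checked numerically. The following sketch is a generic illustration with arbitrarily chosen indices, not a simulation of any particular metamaterial; the negative output angle indicates that the refracted ray lies on the same side of the surface normal as the incident ray.

```python
import math

def refraction_angle_deg(n1, n2, incidence_deg):
    """Solve Snell's law n1*sin(theta1) = n2*sin(theta2) for theta2 in degrees.
    Returns None if no real solution exists (total internal reflection).
    A negative n2 gives a negative angle, i.e. negative refraction."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Ordinary positive-index medium versus a hypothetical negative-index medium
print(refraction_angle_deg(1.0, 1.5, 30.0))   # about +19.5 degrees (conventional refraction)
print(refraction_angle_deg(1.0, -1.5, 30.0))  # about -19.5 degrees (ray on the same side of the normal)
```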
Hyperbolic Hyperbolic metamaterials (HMMs) behave as a metal for certain polarization or direction of light propagation and behave as a dielectric for the other due to the negative and positive permittivity tensor components, giving extreme anisotropy. The material's dispersion relation in wavevector space forms a hyperboloid and therefore it is called a hyperbolic metamaterial. The extreme anisotropy of HMMs leads to directional propagation of light within and on the surface. HMMs have showed various potential applications, such as sensing, reflection modulator, imaging, steering of optical signals, enhanced plasmon resonance effects. Bandgap Electromagnetic bandgap metamaterials (EBG or EBM) control light propagation. This is accomplished either with photonic crystals (PC) or left-handed materials (LHM). PCs can prohibit light propagation altogether. Both classes can allow light to propagate in specific, designed directions and both can be designed with bandgaps at desired frequencies. The period size of EBGs is an appreciable fraction of the wavelength, creating constructive and destructive interference. PC are distinguished from sub-wavelength structures, such as tunable metamaterials, because the PC derives its properties from its bandgap characteristics. PCs are sized to match the wavelength of light, versus other metamaterials that expose sub-wavelength structure. Furthermore, PCs function by diffracting light. In contrast, metamaterial does not use diffraction. PCs have periodic inclusions that inhibit wave propagation due to the inclusions' destructive interference from scattering. The photonic bandgap property of PCs makes them the electromagnetic analog of electronic semi-conductor crystals. EBGs have the goal of creating high quality, low loss, periodic, dielectric structures. An EBG affects photons in the same way semiconductor materials affect electrons. PCs are the perfect bandgap material, because they allow no light propagation. Each unit of the prescribed periodic structure acts like one atom, albeit of a much larger size. EBGs are designed to prevent the propagation of an allocated bandwidth of frequencies, for certain arrival angles and polarizations. Various geometries and structures have been proposed to fabricate EBG's special properties. In practice it is impossible to build a flawless EBG device. EBGs have been manufactured for frequencies ranging from a few gigahertz (GHz) to a few terahertz (THz), radio, microwave and mid-infrared frequency regions. EBG application developments include a transmission line, woodpiles made of square dielectric bars and several different types of low gain antennas. Double positive medium Double positive mediums (DPS) do occur in nature, such as naturally occurring dielectrics. Permittivity and magnetic permeability are both positive and wave propagation is in the forward direction. Artificial materials have been fabricated which combine DPS, ENG and MNG properties. Bi-isotropic and bianisotropic Categorizing metamaterials into double or single negative, or double positive, normally assumes that the metamaterial has independent electric and magnetic responses described by ε and μ. However, in many cases, the electric field causes magnetic polarization, while the magnetic field induces electrical polarization, known as magnetoelectric coupling. Such media are denoted as bi-isotropic. 
Media that exhibit magnetoelectric coupling and that are anisotropic (which is the case for many metamaterial structures) are referred to as bi-anisotropic. Four material parameters are intrinsic to magnetoelectric coupling of bi-isotropic media. They relate the electric (E) and magnetic (H) field strengths to the electric (D) and magnetic (B) flux densities. These parameters are ε, μ, κ and χ, or permittivity, permeability, strength of chirality, and the Tellegen parameter, respectively. In this type of media, the material parameters do not vary with changes along a rotated coordinate system of measurements. In this sense they are invariant or scalar. The intrinsic magnetoelectric parameters, κ and χ, affect the phase of the wave. The effect of the chirality parameter is to split the refractive index. In isotropic media this results in wave propagation only if ε and μ have the same sign. In bi-isotropic media with χ assumed to be zero, and κ a non-zero value, different results appear. Either a backward wave or a forward wave can occur. Alternatively, two forward waves or two backward waves can occur, depending on the strength of the chirality parameter. In the general case, the constitutive relations for bi-anisotropic materials read D = εE + ξH, B = ζE + μH, where ε and μ are the permittivity and the permeability tensors, respectively, whereas ξ and ζ are the two magneto-electric tensors. If the medium is reciprocal, permittivity and permeability are symmetric tensors, and ξ = −ζ^T = −iκ^T, where κ is the chiral tensor describing the chiral electromagnetic and reciprocal magneto-electric response. The chiral tensor can be expressed as κ = (1/3)tr(κ)I + N + J, where tr(κ) is the trace of κ, I is the identity matrix, N is a symmetric trace-free tensor, and J is an antisymmetric tensor. Such a decomposition allows us to classify the reciprocal bianisotropic response, and we can identify the following three main classes: (i) chiral media (tr(κ) ≠ 0), (ii) pseudochiral media (tr(κ) = 0, N ≠ 0), (iii) omega media (J ≠ 0). Chiral Handedness of metamaterials is a potential source of confusion, as the metamaterial literature includes two conflicting uses of the terms left- and right-handed. The first refers to one of the two circularly polarized waves that are the propagating modes in chiral media. The second relates to the triplet of electric field, magnetic field and Poynting vector that arises in negative refractive index media, which in most cases are not chiral. Generally a chiral and/or bianisotropic electromagnetic response is a consequence of 3D geometrical chirality: 3D-chiral metamaterials are composed of 3D-chiral structures embedded in a host medium, and they show chirality-related polarization effects such as optical activity and circular dichroism. The concept of 2D chirality also exists: a planar object is said to be chiral if it cannot be superposed onto its mirror image unless it is lifted from the plane. 2D-chiral metamaterials that are anisotropic and lossy have been observed to exhibit directionally asymmetric transmission (reflection, absorption) of circularly polarized waves due to circular conversion dichroism. On the other hand, a bianisotropic response can arise from geometrically achiral structures possessing neither 2D nor 3D intrinsic chirality. Plum and colleagues investigated magneto-electric coupling due to extrinsic chirality, where the arrangement of an (achiral) structure together with the radiation wave vector is different from its mirror image, and observed large, tuneable linear optical activity, nonlinear optical activity, specular optical activity and circular conversion dichroism.
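The decomposition of the chiral tensor given above is the standard splitting of a 3×3 matrix into its trace part, a symmetric trace-free part N and an antisymmetric part J, so it can be checked numerically. The sketch below uses an arbitrary example tensor (not taken from any real material) and applies the three-class labelling described above.

```python
import numpy as np

def decompose_chiral_tensor(kappa):
    """Split a 3x3 tensor so that kappa = (tr(kappa)/3)*I + N + J,
    with N symmetric and trace-free and J antisymmetric."""
    kappa = np.asarray(kappa, dtype=float)
    trace = np.trace(kappa)
    iso = (trace / 3.0) * np.eye(3)          # isotropic (trace) part
    n_part = 0.5 * (kappa + kappa.T) - iso   # symmetric, trace-free part N
    j_part = 0.5 * (kappa - kappa.T)         # antisymmetric part J
    return trace, n_part, j_part

def classify_response(kappa, tol=1e-12):
    """Label the reciprocal bianisotropic response following the scheme in the text."""
    trace, n_part, j_part = decompose_chiral_tensor(kappa)
    labels = []
    if abs(trace) > tol:
        labels.append("chiral (tr kappa != 0)")
    elif np.linalg.norm(n_part) > tol:
        labels.append("pseudochiral (tr kappa = 0, N != 0)")
    if np.linalg.norm(j_part) > tol:
        labels.append("omega (J != 0)")
    return labels or ["no magneto-electric coupling"]

# Arbitrary illustrative tensor: a non-zero trace plus an antisymmetric part
kappa = np.array([[0.2,  0.1, 0.0],
                  [-0.1, 0.2, 0.0],
                  [0.0,  0.0, 0.2]])
print(classify_response(kappa))   # ['chiral (tr kappa != 0)', 'omega (J != 0)']
```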
Rizza et al. suggested 1D chiral metamaterials in which the effective chiral tensor does not vanish if the system is geometrically one-dimensionally chiral (the mirror image of the entire structure cannot be superposed onto it by using translations without rotations). 3D-chiral metamaterials are constructed from chiral materials or resonators in which the effective chirality parameter κ is non-zero. Wave propagation properties in such chiral metamaterials demonstrate that negative refraction can be realized in metamaterials with a strong chirality and positive ε and μ. This is because the refractive index has distinct values for left and right circularly polarized waves, given by n± = √(εμ) ± κ. It can be seen that a negative index will occur for one polarization if κ > √(εμ). In this case, it is not necessary that either or both ε and μ be negative for backward wave propagation. A negative refractive index due to chirality was first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009. FSS based Frequency selective surface-based metamaterials block signals in one waveband and pass those at another waveband. They have become an alternative to fixed frequency metamaterials, allowing for optional changes of frequencies in a single medium rather than the restrictive limitations of a fixed frequency response. Other types Elastic These metamaterials use different parameters to achieve a negative index of refraction in materials that are not electromagnetic. Furthermore, "a new design for elastic metamaterials that can behave either as liquids or solids over a limited frequency range may enable new applications based on the control of acoustic, elastic and seismic waves." They are also called mechanical metamaterials. Acoustic Acoustic metamaterials control, direct and manipulate sound in the form of sonic, infrasonic or ultrasonic waves in gases, liquids and solids. As with electromagnetic waves, sonic waves can exhibit negative refraction. Control of sound waves is mostly accomplished through the bulk modulus β, mass density ρ and chirality. The bulk modulus and density are analogs of permittivity and permeability in electromagnetic metamaterials. Related to this is the mechanics of sound wave propagation in a lattice structure. Materials also have mass and intrinsic degrees of stiffness. Together, these form a resonant system, and the mechanical (sonic) resonance may be excited by appropriate sonic frequencies (for example audible pulses). Structural Structural metamaterials provide properties such as crushability and light weight. Using projection micro-stereolithography, microlattices can be created using forms much like trusses and girders. Materials four orders of magnitude stiffer than conventional aerogel, but with the same density, have been created. Such materials can withstand a load of at least 160,000 times their own weight by over-constraining the materials. A ceramic nanotruss metamaterial can be flattened and revert to its original state. Thermal Typically, materials found in nature, when homogeneous, are thermally isotropic. That is to say, heat passes through them at roughly the same rate in all directions. However, thermal metamaterials are usually anisotropic due to their highly organized internal structure. Composite materials with highly aligned internal particles or structures, such as fibers and carbon nanotubes (CNTs), are examples of this.
Nonlinear Metamaterials may be fabricated that include some form of nonlinear media, whose properties change with the power of the incident wave. Nonlinear media are essential for nonlinear optics. Most optical materials have a relatively weak response, meaning that their properties change by only a small amount for large changes in the intensity of the electromagnetic field. The local electromagnetic fields of the inclusions in nonlinear metamaterials can be much larger than the average value of the field. Besides, remarkable nonlinear effects have been predicted and observed if the metamaterial effective dielectric permittivity is very small (epsilon-near-zero media). In addition, exotic properties such as a negative refractive index create opportunities to tailor the phase matching conditions that must be satisfied in any nonlinear optical structure. Liquid Metafluids offer programmable properties such as viscosity, compressibility, and optical response. One approach employed 50–500 micron diameter air-filled elastomer spheres suspended in silicone oil. The spheres compress under pressure and regain their shape when the pressure is relieved. Their properties differ across those two states. Unpressurized, they scatter light, making them opaque. Under pressure, they collapse into half-moon shapes, focusing light and becoming transparent. The pressure response could allow them to act as a sensor or as a dynamic hydraulic fluid. Like a cornstarch suspension, the metafluid can act as either a Newtonian or a non-Newtonian fluid. Under pressure, it becomes non-Newtonian – meaning its viscosity changes in response to shear force. Hall metamaterials In 2009, Marc Briane and Graeme Milton proved mathematically that the sign of the Hall coefficient of a three-material composite in 3D can, in principle, be inverted, even when the constituent materials all have Hall coefficients of the same (positive or negative) sign. Later, in 2015, Muamer Kadic et al. showed that a simple perforation of an isotropic material can lead to a change of sign of its Hall coefficient. This theoretical claim was finally experimentally demonstrated by Christian Kern et al. In 2015, it was also demonstrated by Christian Kern et al. that an anisotropic perforation of a single material can lead to a yet more unusual effect, namely the parallel Hall effect. This means that the induced electric field inside a conducting medium is no longer orthogonal to the current and the magnetic field but is actually parallel to the latter. Meta-biomaterials Meta-biomaterials have been purposefully crafted to engage with biological systems, combining principles from metamaterial science and biology. Engineered at the nanoscale, these materials manipulate electromagnetic, acoustic, or thermal properties to facilitate biological processes. Through careful adjustment of their structure and composition, meta-biomaterials hold promise for augmenting various biomedical technologies such as medical imaging, drug delivery, and tissue engineering. This underscores the importance of understanding biological systems through the interdisciplinary lens of materials science. Frequency bands Terahertz Terahertz metamaterials interact at terahertz frequencies, usually defined as 0.1 to 10 THz. Terahertz radiation lies at the far end of the infrared band, just after the end of the microwave band. This corresponds to millimeter and submillimeter wavelengths between 3 mm (EHF band) and 0.03 mm (long-wavelength edge of far-infrared light). Photonic Photonic metamaterials interact with optical frequencies (mid-infrared).
The sub-wavelength period distinguishes them from photonic band gap structures. Tunable Tunable metamaterials allow arbitrary adjustments to frequency changes in the refractive index. A tunable metamaterial expands beyond the bandwidth limitations in left-handed materials by constructing various types of metamaterials. Plasmonic Plasmonic metamaterials exploit surface plasmons, which are produced from the interaction of light with metal-dielectrics. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves or surface waves known as surface plasmon polaritons. Bulk plasma oscillations make possible the effect of negative mass (density). Applications Metamaterials are under consideration for many applications. Metamaterial antennas are commercially available. In 2007, one researcher stated that for metamaterial applications to be realized, energy loss must be reduced, materials must be extended into three-dimensional isotropic materials and production techniques must be industrialized. Antennas Metamaterial antennas are a class of antennas that use metamaterials to improve performance. Demonstrations showed that metamaterials could enhance an antenna's radiated power. Materials that can attain negative permeability allow for properties such as small antenna size, high directivity and tunable frequency. Absorber A metamaterial absorber manipulates the loss components of metamaterials' permittivity and magnetic permeability, to absorb large amounts of electromagnetic radiation. This is a useful feature for photodetection and solar photovoltaic applications. Loss components are also relevant in applications of negative refractive index (photonic metamaterials, antenna systems) or transformation optics (metamaterial cloaking, celestial mechanics), but often are not used in these applications. Superlens A superlens is a two or three-dimensional device that uses metamaterials, usually with negative refraction properties, to achieve resolution beyond the diffraction limit (ideally, infinite resolution). Such a behaviour is enabled by the capability of double-negative materials to yield negative phase velocity. The diffraction limit is inherent in conventional optical devices or lenses. Cloaking devices Metamaterials are a potential basis for a practical cloaking device. The proof of principle was demonstrated on October 19, 2006. No practical cloaks are publicly known to exist. Radar cross-section (RCS-)reducing metamaterials Metamaterials have applications in stealth technology, which reduces RCS in any of various ways (e.g., absorption, diffusion, redirection). Conventionally, the RCS has been reduced either by radar-absorbent material (RAM) or by purpose shaping of the targets such that the scattered energy can be redirected away from the source. While RAMs have narrow frequency band functionality, purpose shaping limits the aerodynamic performance of the target. More recently, metamaterials or metasurfaces are synthesized that can redirect the scattered energy away from the source using either array theory or generalized Snell's law. This has led to aerodynamically favorable shapes for the targets with the reduced RCS. Seismic protection Seismic metamaterials counteract the adverse effects of seismic waves on man-made structures. Sound filtering Metamaterials textured with nanoscale wrinkles could control sound or light signals, such as changing a material's color or improving ultrasound resolution. 
Uses include nondestructive material testing, medical diagnostics and sound suppression. The materials can be made through a high-precision, multi-layer deposition process. The thickness of each layer can be controlled within a fraction of a wavelength. The material is then compressed, creating precise wrinkles whose spacing can cause scattering of selected frequencies. Guided mode manipulations Metamaterials can be integrated with optical waveguides to tailor guided electromagnetic waves (meta-waveguides). Subwavelength structures like metamaterials can be integrated with, for instance, silicon waveguides to develop polarization beam splitters and optical couplers, adding new degrees of freedom for controlling light propagation at the nanoscale in integrated photonic devices. Other applications such as integrated mode converters, polarization (de)multiplexers, structured light generation, and on-chip bio-sensors can be developed. Theoretical models All materials are made of atoms, which are dipoles. These dipoles modify light velocity by a factor n (the refractive index). In a split ring resonator the ring and wire units act as atomic dipoles: the wire acts as a ferroelectric atom, the ring acts as an inductor L, and the open section acts as a capacitor C. The ring as a whole acts as an LC circuit. When the electromagnetic field passes through the ring, an induced current is created. The generated field is perpendicular to the light's magnetic field. The magnetic resonance results in a negative permeability; the refractive index is negative as well. (The lens is not truly flat, since the structure's capacitance imposes a slope for the electric induction.) Several (mathematical) material models describe the frequency response in DNGs. One of these is the Lorentz model, which describes electron motion in terms of a driven-damped harmonic oscillator. The Debye relaxation model applies when the acceleration component of the Lorentz mathematical model is small compared to the other components of the equation. The Drude model applies when the restoring force component is negligible and the coupling coefficient is generally the plasma frequency. Other component distinctions call for the use of one of these models, depending on its polarity or purpose. Three-dimensional composites of metal/non-metallic inclusions periodically/randomly embedded in a low permittivity matrix are usually modeled by analytical methods, including mixing formulas and scattering-matrix based methods. The particle is modeled by either an electric dipole parallel to the electric field or a pair of crossed electric and magnetic dipoles parallel to the electric and magnetic fields, respectively, of the applied wave. These dipoles are the leading terms in the multipole series. They are the only existing ones for a homogeneous sphere, whose polarizability can be easily obtained from the Mie scattering coefficients. In general, this procedure is known as the "point-dipole approximation", which is a good approximation for metamaterials consisting of composites of electrically small spheres. Merits of these methods include low calculation cost and mathematical simplicity. Negative-index medium, non-reflecting crystal and superlens are three concepts that form the foundations of metamaterial theory.
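The Lorentz and Drude models mentioned above have simple closed forms, so their frequency response can be sketched directly. The snippet below is a generic textbook illustration with arbitrary parameters, not a fit to any particular metamaterial; it shows that the Drude form (the Lorentz oscillator with the restoring force, and hence the resonance frequency, set to zero) has a negative real permittivity below the plasma frequency, the regime exploited by the wire arrays described earlier.

```python
import numpy as np

def lorentz_permittivity(w, w_p, w_0, gamma):
    """Lorentz (driven-damped oscillator) model:
    eps(w) = 1 + w_p^2 / (w_0^2 - w^2 - i*gamma*w)."""
    return 1.0 + w_p**2 / (w_0**2 - w**2 - 1j * gamma * w)

def drude_permittivity(w, w_p, gamma):
    """Drude model (no restoring force, w_0 = 0):
    eps(w) = 1 - w_p^2 / (w^2 + i*gamma*w)."""
    return 1.0 - w_p**2 / (w**2 + 1j * gamma * w)

# Arbitrary illustrative parameters, all in the same (normalized) frequency units
w_p, w_0, gamma = 1.0, 0.4, 0.05

for w in np.linspace(0.1, 2.0, 6):
    eps_l = lorentz_permittivity(w, w_p, w_0, gamma)
    eps_d = drude_permittivity(w, w_p, gamma)
    # Re(eps_d) < 0 below the plasma frequency w_p; the Lorentz model goes
    # negative only in a band above its resonance w_0.
    print(f"w={w:4.2f}  Re(eps_Lorentz)={eps_l.real:7.2f}  Re(eps_Drude)={eps_d.real:7.2f}")
```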
Other first principles techniques for analyzing triply-periodic electromagnetic media may be found in Computing photonic band structure Institutional networks MURI The Multidisciplinary University Research Initiative (MURI) encompasses dozens of Universities and a few government organizations. Participating universities include UC Berkeley, UC Los Angeles, UC San Diego, Massachusetts Institute of Technology, and Imperial College in London. The sponsors are Office of Naval Research and the Defense Advanced Research Project Agency. MURI supports research that intersects more than one traditional science and engineering discipline to accelerate both research and translation to applications. As of 2009, 69 academic institutions were expected to participate in 41 research efforts. Metamorphose The Virtual Institute for Artificial Electromagnetic Materials and Metamaterials "Metamorphose VI AISBL" is an international association to promote artificial electromagnetic materials and metamaterials. It organizes scientific conferences, supports specialized journals, creates and manages research programs, provides training programs (including PhD and training programs for industrial partners); and technology transfer to European Industry.
Physical sciences
Basics_9
null
906890
https://en.wikipedia.org/wiki/Benthic%20zone
Benthic zone
The benthic zone is the ecological region at the lowest level of a body of water such as an ocean, lake, or stream, including the sediment surface and some sub-surface layers. The name comes from the Ancient Greek word βένθος (bénthos), meaning "the depths". Organisms living in this zone are called benthos and include microorganisms (e.g., bacteria and fungi) as well as larger invertebrates, such as crustaceans and polychaetes. Organisms here, known as bottom dwellers, generally live in close relationship with the substrate, and many are permanently attached to the bottom. The benthic boundary layer, which includes the bottom layer of water and the uppermost layer of sediment directly influenced by the overlying water, is an integral part of the benthic zone, as it greatly influences the biological activity that takes place there. Examples of contact soil layers include sand bottoms, rocky outcrops, coral, and bay mud. Description Oceans The benthic region of the ocean begins at the shore line (intertidal or littoral zone) and extends downward along the surface of the continental shelf out to sea. Thus, the region incorporates a great variety of physical conditions, differing in depth, light penetration and pressure. Depending on the body of water, the benthic zone may include areas that are only a few inches below the surface. The continental shelf is a gently sloping benthic region that extends away from the land mass. At the continental shelf edge, usually about 200 metres deep, the gradient greatly increases and is known as the continental slope. The continental slope drops down to the deep sea floor. The deep-sea floor is called the abyssal plain and is usually about 4,000 metres deep. The ocean floor is not all flat but has submarine ridges and deep ocean trenches known as the hadal zone. For comparison, the pelagic zone is the descriptive term for the ecological region above the benthos, including the water column up to the surface. At the other end of the spectrum, benthos of the deep ocean includes the bottom levels of the oceanic abyssal zone. For information on animals that live in the deeper areas of the oceans, see aphotic zone. Generally, these include life forms that tolerate cool temperatures and low oxygen levels, but this depends on the depth of the water. Lakes As with oceans, the benthic zone of a lake is its floor, composed of accumulated sunken organic matter. The littoral zone is the zone bordering the shore; light penetrates easily and aquatic plants thrive. The pelagic zone represents the broad mass of water, down as far as the depth to which no light penetrates. Organisms Benthos are the organisms that live in the benthic zone, and are different from those elsewhere in the water column; even within the benthic zone, variations in such factors as light penetration, temperature and salinity give rise to distinct differences, delineated vertically, in the groups of organisms supported. Many organisms adapted to deep-water pressure cannot survive in the upper parts of the water column: the pressure difference can be very significant (approximately one atmosphere for each 10 meters of water depth). Many have adapted to live on the substrate (bottom). In their habitats they can be considered dominant creatures, but they are often a source of prey for Carcharhinidae such as the lemon shark. Because light does not penetrate very deep into ocean water, the energy source for the benthic ecosystem is often marine snow.
Marine snow is organic matter from higher up in the water column that drifts down to the depths. This dead and decaying matter sustains the benthic food chain; most organisms in the benthic zone are scavengers or detritivores. Some microorganisms use chemosynthesis to produce biomass. Benthic organisms can be divided into two categories based on whether they make their home on the ocean floor or a few centimeters into the ocean floor. Those living on the surface of the ocean floor are known as epifauna. Those that live burrowed into the ocean floor are known as infauna. Extremophiles, including piezophiles, which thrive in high pressures, may also live there. An example of a benthic organism is Chorismus antarcticus. Nutrient flux Sources of food for benthic communities can derive from the water column above these habitats in the form of aggregations of detritus, inorganic matter, and living organisms. These aggregations are commonly referred to as marine snow, and are important for the deposition of organic matter and bacterial communities. The amount of material sinking to the ocean floor can average 307,000 aggregates per m2 per day. This amount varies with the depth of the benthos and the degree of benthic-pelagic coupling. The benthos in a shallow region will have more available food than the benthos in the deep sea. Because of their reliance on it, microbes may become spatially dependent on detritus in the benthic zone. The microbes found in the benthic zone, specifically dinoflagellates and foraminifera, colonize detrital matter quite rapidly while forming a symbiotic relationship with each other. In the deep sea, which covers 90–95% of the ocean floor, 90% of the total biomass is made up of prokaryotes. Viruses play an important role in releasing the nutrients locked inside these microbes, making them available to other organisms. Habitats Modern seafloor mapping technologies have revealed linkages between seafloor geomorphology and benthic habitats, in which suites of benthic communities are associated with specific geomorphic settings. Examples include cold-water coral communities associated with seamounts and submarine canyons, kelp forests associated with inner shelf rocky reefs and rockfish associated with rocky escarpments on continental slopes. In oceanic environments, benthic habitats can also be zoned by depth. From the shallowest to the deepest are: the epipelagic (less than 200 meters), the mesopelagic (200–1,000 meters), the bathyal (1,000–4,000 meters), the abyssal (4,000–6,000 meters) and the deepest, the hadal (below 6,000 meters). The lower zones are in deep, pressurized areas of the ocean. Human impacts have occurred at all ocean depths, but are most significant on shallow continental shelf and slope habitats. Many benthic organisms have retained their historic evolutionary characteristics. Some organisms are significantly larger than their relatives living in shallower zones, largely because of higher oxygen concentration in deep water. It is not easy to map or observe these organisms and their habitats, and most modern observations are made using remotely operated underwater vehicles (ROVs), and rarely submarines. Ecological research Benthic macroinvertebrates have many important ecological functions, such as regulating the flow of materials and energy in river ecosystems through their food web linkages. 
Because of this correlation between the flow of energy and nutrients, benthic macroinvertebrates have the ability to influence the food resources of fish and other organisms in aquatic ecosystems. For example, the addition of a moderate amount of nutrients to a river over the course of several years resulted in increases in invertebrate richness, abundance, and biomass. These in turn resulted in increased food resources for native species of fish with insignificant alteration of the macroinvertebrate community structure and trophic pathways. The presence of macroinvertebrates such as Amphipoda also affects the dominance of certain types of algae in benthic ecosystems. In addition, because benthic zones are influenced by the flow of dead organic material, there have been studies conducted on the relationship between stream and river water flows and the resulting effects on the benthic zone. Low-flow events restrict nutrient transport from benthic substrates to food webs and cause a decrease in benthic macroinvertebrate biomass, which leads to the disappearance of food sources into the substrate. Because the benthic system regulates energy in aquatic ecosystems, studies have been made of the mechanisms of the benthic zone in order to better understand the ecosystem. Benthic diatoms have been used by the European Union's Water Framework Directive (WFD) to establish ecological quality ratios that determine the ecological status of lakes in the UK. Early research is being conducted on benthic assemblages to see if they can be used as indicators of healthy aquatic ecosystems. Benthic assemblages in urbanized coastal regions are not functionally equivalent to benthic assemblages in untouched regions. Ecologists are attempting to understand the relationship between heterogeneity and maintaining biodiversity in aquatic ecosystems. Benthic algae have been used as an inherently good subject for studying short-term changes and community responses to heterogeneous conditions in streams. Understanding the potential mechanisms involving benthic periphyton and the effects on heterogeneity within a stream may provide a better understanding of the structure and function of stream ecosystems. Periphyton populations suffer from high natural spatial variability while difficult accessibility simultaneously limits the practicable number of samples that can be taken. Targeting periphyton locations known to provide reliable samples, especially hard surfaces, is recommended in the European Union benthic monitoring program (by Kelly 1998 for the United Kingdom, then in the EU, and for the EU as a whole by CEN 2003 and CEN 2004) and in some United States programs (by Moulton et al. 2002). Benthic gross primary production (GPP) may be important in maintaining biodiversity hotspots in littoral zones in large lake ecosystems. However, the relative contributions of benthic habitats within specific ecosystems are poorly explored and more research is planned.
Physical sciences
Oceanography
Earth science
908032
https://en.wikipedia.org/wiki/Hair%20iron
Hair iron
A hair iron or hair tong is a tool used to change the arrangement of the hair using heat. There are three general kinds: curling irons, used to make the hair curl; straightening irons, commonly called straighteners or flat irons, used to straighten the hair; and crimping irons, used to create crimps of the desired size in the hair. Most models have electric heating; cordless curling irons or flat irons typically use butane, and some flat irons use batteries that can last up to 30 minutes for straightening. Overuse of these tools can cause severe damage to hair. Types of hair irons Curling iron Curling irons, also known as curling tongs, create waves or curls in hair using a variety of different methods. There are many different types of modern curling irons, which can vary by diameter, material, and shape of barrel and the type of handle. The barrel's diameter varies considerably between models. Smaller barrels typically create spiral curls or ringlets, and larger barrels are used to give shape and volume to a hairstyle. Curling irons are typically made of ceramic, metal, Teflon, titanium, or tourmaline. The barrel's shape can either be a cone, reverse cone, or cylinder, and the iron can have brush attachments or double and triple barrels. The curling iron can also have either a clipless, Marcel, or spring-loaded handle. Spring-loaded handles are the most popular and use a spring to work the barrel's clamp. When using a Marcel handle, one applies pressure to the clamp. Clipless wands have no clamp: the user simply wraps hair around a rod. Most clipless curling irons come with a Kevlar glove to avoid burns. Straightening irons Straightening irons, straighteners, or flat irons work by breaking down the positive hydrogen bonds found in the hair's cortex, which cause hair to open, bend and become curly. Once the bonds are broken, hair is prevented from holding its original, natural form, though the hydrogen bonds can re-form if exposed to moisture. Straightening irons use mainly ceramic material for their plates. Low-end straighteners use a single layer of ceramic coating on the plates, whereas high-end straighteners use multiple layers or even 100% ceramic material. Some straightening irons are fitted with an automatic shut-off feature to prevent fire accidents. Early hair straightening systems relied on harsh chemicals that tended to damage the hair. In the 1870s, the French hairdresser Marcel Grateau introduced heated metal hair care implements such as hot combs to straighten hair. Madame C.J. Walker used combs with wider teeth and popularized their use together with her system of chemical scalp preparation and straightening lotions. Her mentor Annie Malone is sometimes said to have patented the hot comb. Heated metal implements slide more easily through the hair, reducing damage and dryness. Women in the 1960s sometimes used clothing irons to straighten their hair. In 1909, Isaac K. Shero patented the first hair straightener composed of two flat irons that are heated and pressed together. Ceramic and electrical straighteners were introduced later, allowing adjustment of heat settings and straightener size. A ceramic hair straightener brush was patented in 2013. Sharon Rabi released the first straightening brush in 2015 under the DAFNI brand name. The ceramic straightening brush has a larger surface area than a traditional flat iron. Crimping irons Crimping irons or crimpers work by crimping hair in sawtooth style. The look is similar to the crimps left after taking out small braids. 
Crimping irons come in different sizes with different sized ridges on the paddles. Larger ridges produce larger crimps in the hair and smaller ridges produce smaller crimps. Crimped hair was very popular in the 1980s and 1990s.
Technology
Household appliances
null
908385
https://en.wikipedia.org/wiki/Nanoelectromechanical%20systems
Nanoelectromechanical systems
Nanoelectromechanical systems (NEMS) are a class of devices integrating electrical and mechanical functionality on the nanoscale. NEMS form the next logical miniaturization step from so-called microelectromechanical systems, or MEMS devices. NEMS typically integrate transistor-like nanoelectronics with mechanical actuators, pumps, or motors, and may thereby form physical, biological, and chemical sensors. The name derives from typical device dimensions in the nanometer range, leading to low mass, high mechanical resonance frequencies, potentially large quantum mechanical effects such as zero point motion, and a high surface-to-volume ratio useful for surface-based sensing mechanisms. Applications include accelerometers and sensors to detect chemical substances in the air. History Background As noted by Richard Feynman in his famous talk in 1959, "There's Plenty of Room at the Bottom," there are many potential applications of machines at smaller and smaller sizes; by building and controlling devices at smaller scales, all technology benefits. The expected benefits include greater efficiencies and reduced size, decreased power consumption and lower costs of production in electromechanical systems. The first silicon dioxide field effect transistors were built by Frosch and Derick in 1957 at Bell Labs. In 1960, Atalla and Kahng at Bell Labs fabricated a MOSFET with a gate oxide thickness of 100 nm. In 1962, Atalla and Kahng fabricated a nanolayer-base metal–semiconductor junction (M–S junction) transistor that used gold (Au) thin films with a thickness of 10 nm. In 1987, Bijan Davari led an IBM research team that demonstrated the first MOSFET with a 10 nm oxide thickness. Multi-gate MOSFETs enabled scaling below 20 nm channel length, starting with the FinFET. The FinFET originates from the research of Digh Hisamoto at Hitachi Central Research Laboratory in 1989. At UC Berkeley, a group led by Hisamoto and TSMC's Chenming Hu fabricated FinFET devices down to a 17 nm channel length in 1998. NEMS In 2000, the first very-large-scale integration (VLSI) NEMS device was demonstrated by researchers at IBM. Its premise was an array of AFM tips which can heat/sense a deformable substrate in order to function as a memory device (Millipede memory). Further devices have been described by Stefan de Haan. In 2007, the International Technology Roadmap for Semiconductors (ITRS) included NEMS memory as a new entry in its Emerging Research Devices section. Atomic force microscopy A key application of NEMS is atomic force microscope tips. The increased sensitivity achieved by NEMS leads to smaller and more efficient sensors to detect stresses, vibrations, forces at the atomic level, and chemical signals. AFM tips and other detection at the nanoscale rely heavily on NEMS. Approaches to miniaturization Two complementary approaches to fabrication of NEMS can be found, the top-down approach and the bottom-up approach. The top-down approach uses the traditional microfabrication methods, i.e. optical, electron-beam lithography and thermal treatments, to manufacture devices. While being limited by the resolution of these methods, it allows a large degree of control over the resulting structures. In this manner devices such as nanowires, nanorods, and patterned nanostructures are fabricated from metallic thin films or etched semiconductor layers. For top-down approaches, increasing surface area to volume ratio enhances the reactivity of nanomaterials. 
Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to self-organize or self-assemble into some useful conformation, or rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. This allows fabrication of much smaller structures, albeit often at the cost of limited control of the fabrication process. Furthermore, while residual material is removed from the original structure in the top-down approach, minimal material is removed or wasted in the bottom-up approach. A combination of these approaches may also be used, in which nanoscale molecules are integrated into a top-down framework. One such example is the carbon nanotube nanomotor. Materials Carbon allotropes Many of the commonly used materials for NEMS technology have been carbon based, specifically diamond, carbon nanotubes and graphene. This is mainly because of the useful properties of carbon based materials which directly meet the needs of NEMS. The mechanical properties of carbon (such as large Young's modulus) are fundamental to the stability of NEMS while the metallic and semiconductor conductivities of carbon based materials allow them to function as transistors. Both graphene and diamond exhibit high Young's modulus, low density, low friction, exceedingly low mechanical dissipation, and large surface area. The low friction of CNTs allows practically frictionless bearings and has thus been a huge motivation towards practical applications of CNTs as constitutive elements in NEMS, such as nanomotors, switches, and high-frequency oscillators. The physical strength of carbon nanotubes and graphene allows carbon-based materials to meet higher stress demands where common materials would normally fail, further supporting their use as major materials in NEMS technological development. Along with the mechanical benefits of carbon based materials, the electrical properties of carbon nanotubes and graphene allow them to be used in many electrical components of NEMS. Nanotransistors have been developed from both carbon nanotubes and graphene. Transistors are one of the basic building blocks for all electronic devices, so by effectively developing usable transistors, carbon nanotubes and graphene are both very crucial to NEMS. Nanomechanical resonators are frequently made of graphene. As NEMS resonators are scaled down in size, there is a general trend for a decrease in quality factor in inverse proportion to surface area to volume ratio. However, despite this challenge, quality factors as high as 2,400 have been demonstrated experimentally. The quality factor describes the purity of tone of the resonator's vibrations. Furthermore, it has been theoretically predicted that clamping graphene membranes on all sides yields increased quality factors. Graphene NEMS can also function as mass, force, and position sensors. Metallic carbon nanotubes Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. They can be considered as rolled-up graphene. When a sheet is rolled at specific and discrete ("chiral") angles, the combination of the rolling angle and radius determines whether the nanotube has a bandgap (semiconducting) or no bandgap (metallic). Metallic carbon nanotubes have also been proposed for nanoelectronic interconnects since they can carry high current densities. 
This is a useful property as wires to transfer current are another basic building block of any electrical system. Carbon nanotubes have specifically found so much use in NEMS that methods have already been discovered to connect suspended carbon nanotubes to other nanostructures. This allows carbon nanotubes to form complicated nanoelectric systems. Because carbon based products can be properly controlled and act as interconnects as well as transistors, they serve as a fundamental material in the electrical components of NEMS. CNT-based NEMS switches A major disadvantage of MEMS switches compared with NEMS switches is the limited, microsecond-range switching speed of MEMS, which impedes performance in high-speed applications. Limitations on switching speed and actuation voltage can be overcome by scaling down devices from micro to nanometer scale. A comparison of performance parameters between carbon nanotube (CNT)-based NEMS switches and their CMOS counterparts revealed that CNT-based NEMS switches retained performance at lower levels of energy consumption and had a subthreshold leakage current several orders of magnitude smaller than that of CMOS switches. CNT-based NEMS with doubly clamped structures are being further studied as potential solutions for floating gate nonvolatile memory applications. Difficulties Despite all of the useful properties of carbon nanotubes and graphene for NEMS technology, both of these products face several hindrances to their implementation. One of the main problems is carbon's response to real life environments. Carbon nanotubes exhibit a large change in electronic properties when exposed to oxygen. Similarly, other changes to the electronic and mechanical attributes of carbon based materials must be fully explored before their implementation, especially because of their high surface area which can easily react with surrounding environments. Carbon nanotubes were also found to have varying conductivities, being either metallic or semiconducting depending on their helicity when processed. Because of this, special treatment must be given to the nanotubes during processing to ensure that all of the nanotubes have appropriate conductivities. Graphene also has complicated electric conductivity properties compared to traditional semiconductors because it lacks an energy band gap and essentially changes all the rules for how electrons move through a graphene-based device. This means that traditional constructions of electronic devices will likely not work and completely new architectures must be designed for these new electronic devices. Nanoelectromechanical accelerometer Graphene's mechanical and electronic properties have made it favorable for integration into NEMS accelerometers, such as small sensors and actuators for heart monitoring systems and mobile motion capture. The atomic scale thickness of graphene provides a pathway for accelerometers to be scaled down from micro to nanoscale while retaining the system's required sensitivity levels. By suspending a silicon proof mass on a double-layer graphene ribbon, a nanoscale spring-mass and piezoresistive transducer can be made with capabilities comparable to those of currently produced accelerometer transducers. The spring mass provides greater accuracy, and the piezoresistive properties of graphene convert the strain from acceleration into electrical signals for the accelerometer. 
The suspended graphene ribbon simultaneously forms the spring and piezoresistive transducer, making efficient use of space while improving the performance of NEMS accelerometers. Polydimethylsiloxane (PDMS) Failures arising from high adhesion and friction are of concern for many NEMS. NEMS frequently utilize silicon due to well-characterized micromachining techniques; however, its intrinsic stiffness often hinders the capability of devices with moving parts. A study conducted by Ohio State researchers compared the adhesion and friction parameters of single-crystal silicon with a native oxide layer against a PDMS coating. PDMS is a silicone elastomer that is highly mechanically tunable, chemically inert, thermally stable, permeable to gases, transparent, non-fluorescent, biocompatible, and nontoxic. As is inherent to polymers, the Young's modulus of PDMS can vary over two orders of magnitude by manipulating the extent of crosslinking of polymer chains, making it a viable material in NEMS and biological applications. PDMS can form a tight seal with silicon and thus be easily integrated into NEMS technology, optimizing both mechanical and electrical properties. Polymers like PDMS are beginning to gain attention in NEMS due to their comparatively inexpensive, simplified, and time-efficient prototyping and manufacturing. Rest time has been found to correlate directly with adhesive force, and increased relative humidity leads to an increase in adhesive forces for hydrophilic polymers. Contact angle measurements and Laplace force calculations support the characterization of PDMS's hydrophobic nature, which expectedly corresponds with its experimentally verified independence from relative humidity. PDMS's adhesive forces are also independent of rest time; it performs versatilely under varying relative-humidity conditions and possesses a lower coefficient of friction than silicon. PDMS coatings help mitigate high-velocity problems such as sliding. Thus, friction at contact surfaces remains low even at considerably high velocities. In fact, on the microscale, friction reduces with increasing velocity. The hydrophobicity and low friction coefficient of PDMS have given rise to its potential in being further incorporated within NEMS experiments that are conducted at varying relative humidities and high relative sliding velocities. PDMS-coated piezoresistive nanoelectromechanical systems diaphragm PDMS is frequently used within NEMS technology. For instance, PDMS coating on a diaphragm can be used for chloroform vapor detection. Researchers from the National University of Singapore invented a polydimethylsiloxane (PDMS)-coated nanoelectromechanical system diaphragm embedded with silicon nanowires (SiNWs) to detect chloroform vapor at room temperature. In the presence of chloroform vapor, the PDMS film on the micro-diaphragm absorbs vapor molecules and consequently enlarges, leading to deformation of the micro-diaphragm. The SiNWs implanted within the micro-diaphragm are linked in a Wheatstone bridge, which translates the deformation into a quantitative output voltage. In addition, the micro-diaphragm sensor also demonstrates low-cost processing at low power consumption. It possesses great potential for scalability, ultra-compact footprint, and CMOS-IC process compatibility. By switching the vapor-absorption polymer layer, similar methods can be applied that should theoretically be able to detect other organic vapors. 
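The Wheatstone-bridge readout described above can be illustrated with a short numerical sketch. This is only a minimal example, not the circuit used by the National University of Singapore group: the excitation voltage, arm resistances, gauge factor, and strain values below are hypothetical, and a simple quarter-bridge arrangement (one strain-sensitive arm) is assumed.

```python
# Minimal sketch of a Wheatstone-bridge piezoresistive readout.
# All component values are hypothetical; a real NEMS diaphragm sensor
# would use its own bridge configuration and calibration.

def bridge_output(v_excite, r1, r2, r3, r4):
    """Differential output (volts) of a Wheatstone bridge.

    r1/r2 form one voltage divider and r3/r4 the other; the output is
    the difference between the two midpoint voltages.
    """
    return v_excite * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Quarter-bridge: one piezoresistive element (r2) responds to diaphragm strain.
R0 = 10e3            # nominal resistance of each arm, ohms (assumed)
GAUGE_FACTOR = 100   # piezoresistive gauge factor (assumed)
V_EXCITE = 1.0       # bridge excitation voltage, volts (assumed)

for strain in (0.0, 1e-6, 1e-5, 1e-4):          # strain from vapor-induced swelling
    r_sense = R0 * (1 + GAUGE_FACTOR * strain)  # strained piezoresistor
    v_out = bridge_output(V_EXCITE, R0, r_sense, R0, R0)
    print(f"strain = {strain:.0e}  ->  output = {v_out * 1e3:.4f} mV")
```

For small resistance changes the output approaches the familiar quarter-bridge approximation V_excite × GF × strain / 4, so the bridge converts a tiny, otherwise hard-to-measure deformation into a millivolt-scale electrical signal.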
In addition to its inherent properties discussed in the Materials section, PDMS can be used to absorb chloroform, whose effects are commonly associated with swelling and deformation of the micro-diaphragm; various organic vapors were also gauged in this study. With good aging stability and appropriate packaging, the degradation rate of PDMS in response to heat, light, and radiation can be slowed. Biohybrid NEMS The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Examples include the facile top-down nanostructuring of thiol-ene polymers to create cross-linked and mechanically robust nanostructures that are subsequently functionalized with proteins. Simulations Computer simulations have long been important counterparts to experimental studies of NEMS devices. Through continuum mechanics and molecular dynamics (MD), important behaviors of NEMS devices can be predicted via computational modeling before engaging in experiments. Additionally, combining continuum and MD techniques enables engineers to efficiently analyze the stability of NEMS devices without resorting to ultra-fine meshes and time-intensive simulations. Simulations have other advantages as well: they do not require the time and expertise associated with fabricating NEMS devices; they can effectively predict the interrelated roles of various electromechanical effects; and parametric studies can be conducted fairly readily as compared with experimental approaches. For example, computational studies have predicted the charge distributions and “pull-in” electromechanical responses of NEMS devices. Using simulations to predict mechanical and electrical behavior of these devices can help optimize NEMS device design parameters. Reliability and Life Cycle of NEMS Reliability and Challenges Reliability provides a quantitative measure of the component's integrity and performance without failure for a specified product lifetime. Failure of NEMS devices can be attributed to a variety of sources, such as mechanical, electrical, chemical, and thermal factors. Identification of failure mechanisms, yield improvement, scarcity of information, and reproducibility issues have been identified as major challenges to achieving higher levels of reliability for NEMS devices. Such challenges arise during both manufacturing stages (i.e. wafer processing, packaging, final assembly) and post-manufacturing stages (i.e. transportation, logistics, usage). Packaging Packaging challenges often account for 75–95% of the overall costs of MEMS and NEMS. Factors of wafer dicing, device thickness, sequence of final release, thermal expansion, mechanical stress isolation, power and heat dissipation, creep minimization, media isolation, and protective coatings are considered by packaging design to align with the design of the MEMS or NEMS component. Delamination analysis, motion analysis, and life-time testing have been used to assess wafer-level encapsulation techniques, such as cap to wafer, wafer to wafer, and thin film encapsulation. Wafer-level encapsulation techniques can lead to improved reliability and increased yield for both micro and nanodevices. Manufacturing Assessing the reliability of NEMS in early stages of the manufacturing process is essential for yield improvement. 
Forms of surface forces, such as adhesion and electrostatic forces, are largely dependent on surface topography and contact geometry. Selective manufacturing of nano-textured surfaces reduces contact area, improving both adhesion and friction performance for NEMS. Furthermore, the implementation of nanoposts on engineered surfaces increases hydrophobicity, leading to a reduction in both adhesion and friction. Adhesion and friction can also be manipulated by nanopatterning to adjust surface roughness for the appropriate applications of the NEMS device. Researchers from Ohio State University used atomic/friction force microscopy (AFM/FFM) to examine the effects of nanopatterning on hydrophobicity, adhesion, and friction for hydrophilic polymers with two types of patterned asperities (low aspect ratio and high aspect ratio). Roughness on hydrophilic and hydrophobic surfaces is found to have inversely correlated and directly correlated relationships, respectively. Due to the large surface-area-to-volume ratio and sensitivity of NEMS devices, adhesion and friction can impede their performance and reliability. These tribological issues arise from natural down-scaling of these tools; however, the system can be optimized through the manipulation of the structural material, surface films, and lubricant. In comparison to undoped Si or polysilicon films, SiC films possess the lowest frictional output, resulting in increased scratch resistance and enhanced functionality at high temperatures. Hard diamond-like carbon (DLC) coatings exhibit low friction, high hardness and wear resistance, in addition to chemical and electrical resistances. Roughness, a factor that reduces wetting and increases hydrophobicity, can be optimized by increasing the contact angle to reduce wetting and allow for low adhesion and reduced interaction of the device with its environment. Material properties are size-dependent. Therefore, analyzing the unique characteristics of NEMS and nanoscale materials becomes increasingly important to retaining reliability and long-term stability of NEMS devices. Some mechanical properties of nanomaterials, such as hardness and elastic modulus, are determined by using a nanoindenter and bend tests on a material that has undergone fabrication processes. These measurements, however, do not consider how the device will operate in industry under prolonged or cyclic stresses and strains. The theta structure is a NEMS model that exhibits unique mechanical properties. Composed of Si, the structure has high strength and is able to concentrate stresses at the nanoscale to measure certain mechanical properties of materials. Residual stresses To increase reliability of structural integrity, characterization of both material structure and intrinsic stresses at appropriate length scales becomes increasingly pertinent. Effects of residual stresses include but are not limited to fracture, deformation, delamination, and nanosized structural changes, which can result in failure of operation and physical deterioration of the device. Residual stresses can influence electrical and optical properties. For instance, in various photovoltaic and light-emitting diode (LED) applications, the band gap energy of semiconductors can be tuned accordingly by the effects of residual stress. Atomic force microscopy (AFM) and Raman spectroscopy can be used to characterize the distribution of residual stresses on thin films in terms of force volume imaging, topography, and force curves. 
Furthermore, residual stress can be used to measure nanostructures’ melting temperature by using differential scanning calorimetry (DSC) and temperature-dependent X-ray diffraction (XRD). Future Key hurdles currently preventing the commercial application of many NEMS devices include low yields and high device-quality variability. Before NEMS devices can actually be implemented, reasonable integrations of carbon based products must be created. A recent step in that direction has been demonstrated for diamond, achieving a processing level comparable to that of silicon. The focus is currently shifting from experimental work towards practical applications and device structures that will implement and profit from such novel devices. The next challenge to overcome involves understanding all of the properties of these carbon-based tools, and using the properties to make efficient and durable NEMS with low failure rates. Carbon-based materials have served as prime materials for NEMS use, because of their exceptional mechanical and electrical properties. Recently, nanowires of chalcogenide glasses have been shown to be a key platform to design tunable NEMS owing to the availability of active modulation of Young's modulus. The global market of NEMS is projected to reach $108.88 million by 2022. Applications Nanoelectromechanical relay Nanoelectromechanical systems mass spectrometer Nanoelectromechanical-based cantilevers Researchers from the California Institute of Technology developed a NEM-based cantilever with mechanical resonances up to very high frequencies (VHF). Its incorporation of electronic displacement transducers based on piezoresistive thin metal films facilitates unambiguous and efficient nanodevice readout. The functionalization of the device's surface using a thin polymer coating with a high partition coefficient for the targeted species enables NEMS-based cantilevers to provide chemisorption measurements at room temperature with mass resolution at less than one attogram. Further capabilities of NEMS-based cantilevers have been exploited for the applications of sensors, scanning probes, and devices operating at very high frequency (100 MHz).
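As a rough illustration of how such cantilever resonators detect adsorbed mass, the sketch below uses the standard small-shift approximation Δf/f0 ≈ −Δm/(2·m_eff), valid when the added mass is much smaller than the resonator's effective mass. The device parameters (effective mass, resonance frequency, frequency resolution) are illustrative assumptions, not the Caltech device's actual specifications.

```python
# Sketch of resonant mass sensing with a NEMS cantilever.
# Uses the first-order relation delta_f / f0 ~= -delta_m / (2 * m_eff),
# valid when the adsorbed mass is much smaller than the effective mass.
# All device parameters below are assumed for illustration only.

F0 = 100e6        # resonance frequency, Hz (VHF range, as in the text)
M_EFF = 1e-15     # effective resonator mass, kg (~1 picogram, assumed)
DF_MIN = 1.0      # smallest resolvable frequency shift, Hz (assumed)

def mass_from_shift(delta_f, f0=F0, m_eff=M_EFF):
    """Adsorbed mass (kg) inferred from a downward frequency shift (Hz)."""
    return -2.0 * m_eff * delta_f / f0

# Smallest detectable mass for the assumed frequency resolution.
m_min = mass_from_shift(-DF_MIN)
print(f"minimum detectable mass ~ {m_min * 1e24:.1f} zg")   # kg -> zeptograms

# Example: a 50 Hz downward shift measured after exposure to an analyte.
print(f"50 Hz shift corresponds to ~ {mass_from_shift(-50.0) * 1e21:.2f} ag")  # kg -> attograms
```

With these assumed numbers the minimum detectable mass comes out in the tens of zeptograms, which is consistent with the sub-attogram mass resolution quoted for functionalized NEMS cantilevers.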
Technology
Machinery and tools: General
null
908518
https://en.wikipedia.org/wiki/Interchangeable%20parts
Interchangeable parts
Interchangeable parts are parts (components) that are identical for practical purposes. They are made to specifications that ensure that they are so nearly identical that they will fit into any assembly of the same type. One such part can freely replace another, without any custom fitting, such as filing. This interchangeability allows easy assembly of new devices, and easier repair of existing devices, while minimizing both the time and skill required of the person doing the assembly or repair. The concept of interchangeability was crucial to the introduction of the assembly line at the beginning of the 20th century, and has become an important element of some modern manufacturing but is missing from other important industries. Interchangeability of parts was achieved by combining a number of innovations and improvements in machining operations and the invention of several machine tools, such as the slide rest lathe, screw-cutting lathe, turret lathe, milling machine and metal planer. Additional innovations included jigs for guiding the machine tools, fixtures for holding the workpiece in the proper position, and blocks and gauges to check the accuracy of the finished parts. Electrification allowed individual machine tools to be powered by electric motors, eliminating line shaft drives from steam engines or water power and allowing higher speeds, making modern large-scale manufacturing possible. Modern machine tools often have numerical control (NC) which evolved into CNC (computer numerical control) when microprocessors became available. Methods for industrial production of interchangeable parts in the United States were first developed in the nineteenth century. The term American system of manufacturing was sometimes applied to them at the time, in distinction from earlier methods. Within a few decades such methods were in use in various countries, so American system is now a term of historical reference rather than current industrial nomenclature. First use Evidence of the use of interchangeable parts can be traced back over two thousand years to Carthage in the First Punic War. Carthaginian ships had standardized, interchangeable parts that even came with assembly instructions akin to "tab A into slot B" marked on them. Origins of the modern concept In the late-18th century, French General Jean-Baptiste Vaquette de Gribeauval promoted standardized weapons in what became known as the Système Gribeauval after it was issued as a royal order in 1765. (At the time the system focused on artillery more than on muskets or handguns.) One of the accomplishments of the system was that solid-cast cannons were bored to precise tolerances, which allowed the walls to be thinner than cannons poured with hollow cores. However, because cores were often off-center, the wall thickness determined the size of the bore. Standardized boring made for shorter cannons without sacrificing accuracy and range because of the tighter fit of the shells; it also allowed standardization of the shells. Before the 18th century, devices such as guns were made one at a time by gunsmiths in a unique manner. If one single component of a firearm needed a replacement, the entire firearm either had to be sent to an expert gunsmith for custom repairs, or discarded and replaced by another firearm. During the 18th and early-19th centuries, the idea of replacing these methods with a system of interchangeable manufacture gradually developed. The development took decades and involved many people. 
Gribeauval provided patronage to Honoré Blanc, who attempted to implement the Système Gribeauval at the musket level. By around 1778, Honoré Blanc began producing some of the first firearms with interchangeable flintlock mechanisms, although they were carefully made by craftsmen. Blanc demonstrated in front of a committee of scientists that his muskets could be fitted with flintlock mechanisms picked at random from a pile of parts. In 1785, muskets with interchangeable locks caught the attention of the United States' Ambassador to France, Thomas Jefferson, through the efforts of Honoré Blanc. Jefferson tried unsuccessfully to persuade Blanc to move to America, then wrote to the American Secretary of War with the idea, and when he returned to the USA he worked to fund its development. President George Washington approved of the concept, and in 1798 Eli Whitney signed a contract to mass-produce 12,000 muskets built under the new system. Between 4th July 1793 and 25th November 1795, the London gunsmith Henry Nock delivered 12,010 'screwless' or 'Duke's' locks to the British Board of Ordnance. These locks were intended to be interchangeable, being manufactured in large volumes in a steam-powered factory using gauges and lathes. Subsequent experiments have suggested that the lock's components were interchangeable at a higher rate than those of the later British New Land Pattern musket and the American M1816 musket. Louis de Tousard, who fled the French Revolution, joined the U.S. Corps of Artillerists in 1795 and wrote an influential artillerist's manual that stressed the importance of standardization. Implementation Numerous inventors began to try to implement the principle Blanc had described. The development of the machine tools and manufacturing practices required would be a great expense to the U.S. Ordnance Department, and for some years while trying to achieve interchangeability, the firearms produced cost more to manufacture. By 1853, there was evidence that interchangeable parts, then perfected by the Federal Armories, led to savings. The Ordnance Department freely shared the techniques used with outside suppliers. Eli Whitney and an early attempt In the US, Eli Whitney saw the potential benefit of developing "interchangeable parts" for the firearms of the United States military. In July 1801 he built ten guns, all containing the same exact parts and mechanisms, then disassembled them before the United States Congress. He placed the parts in a mixed pile and, with help, reassembled all of the firearms in front of Congress, much as Blanc had done some years before. The Congress was captivated and ordered a standard for all United States equipment. The use of interchangeable parts removed the problems of earlier eras concerning the difficulty or impossibility of producing new parts for old equipment. If one firearm part failed, another could be ordered, and the firearm would not need to be discarded. The catch was that Whitney's guns were costly and handmade by skilled workmen. Charles Fitch credited Whitney with successfully executing a firearms contract with interchangeable parts using the American System, but historians Merritt Roe Smith and Robert B. Gordon have since determined that Whitney never actually achieved interchangeable parts manufacturing. His family's arms company, however, did so after his death. 
Brunel's sailing blocks Mass production using interchangeable parts was first achieved in 1803 by Marc Isambard Brunel in cooperation with Henry Maudslay and Simon Goodrich, under the management of (and with contributions by) Brigadier-General Sir Samuel Bentham, the Inspector General of Naval Works at Portsmouth Block Mills, Portsmouth Dockyard, Hampshire, England. At the time, the Napoleonic Wars were at their height, and the Royal Navy was in a state of expansion that required 100,000 pulley blocks to be manufactured a year. Bentham had already achieved remarkable efficiency at the docks by introducing power-driven machinery and reorganising the dockyard system. Marc Brunel, a pioneering engineer, and Maudslay, a founding father of machine tool technology who had developed the first industrially practical screw-cutting lathe in 1800 which standardized screw thread sizes for the first time, collaborated on plans to manufacture block-making machinery; the proposal was submitted to the Admiralty, which agreed to commission his services. By 1805, the dockyard had been fully updated with the revolutionary, purpose-built machinery at a time when products were still built individually with different components. A total of 45 machines were required to perform 22 processes on the blocks, which could be made in three different sizes. The machines were almost entirely made of metal, thus improving their accuracy and durability. The machines would make markings and indentations on the blocks to ensure alignment throughout the process. One of the many advantages of this new method was the increase in labour productivity due to the less labour-intensive requirements of managing the machinery. Richard Beamish, assistant to Brunel's son and engineer, Isambard Kingdom Brunel, wrote: So that ten men, by the aid of this machinery, can accomplish with uniformity, celerity and ease, what formerly required the uncertain labour of one hundred and ten. By 1808, annual production had reached 130,000 blocks and some of the equipment was still in operation as late as the mid-twentieth century. Terry's clocks: success in wood Eli Terry was producing interchangeable parts using a milling machine as early as 1800. Ward Francillon, a horologist, concluded in a study that Terry had already accomplished interchangeable parts as early as 1800. The study examined several of Terry's clocks produced between 1800 and 1807. The parts were labelled and interchanged as needed. The study concluded that all clock pieces were interchangeable. The very first mass production using interchangeable parts in America was Eli Terry's 1806 Porter Contract, which called for the production of 4000 clocks in three years. During this contract, Terry crafted four thousand wooden-gear tall case movements, at a time when the annual average was about a dozen. Unlike Eli Whitney, Terry manufactured his products without government funding. Terry saw the potential of clocks becoming a household object. With the use of a milling machine, Terry was able to mass-produce clock wheels and plates a few dozen at a time. Jigs and templates were used to make uniform pinions, so that all parts could be assembled using an assembly line. North and Hall: success in metal The crucial step toward interchangeability in metal parts was taken by Simeon North, working only a few miles from Eli Terry. North created one of the world's first true milling machines to do metal shaping that had been done by hand with a file. 
Diana Muir believes that North's milling machine was online around 1816. Muir, Merritt Roe Smith, and Robert B. Gordon all agree that before 1832 both Simeon North and John Hall were able to mass-produce complex machines with moving parts (guns) using a system that entailed the use of rough-forged parts, with a milling machine that milled the parts to near-correct size, and that were then "filed to gage by hand with the aid of filing jigs." Historians differ over the question of whether Hall or North made the crucial improvement. Merritt Roe Smith believes that it was done by Hall. Muir demonstrates the close personal ties and professional alliances between Simeon North and neighbouring mechanics mass-producing wooden clocks to argue that the process for manufacturing guns with interchangeable parts was most probably devised by North in emulation of the successful methods used in mass-producing clocks. It may not be possible to resolve the question with absolute certainty unless documents now unknown should surface in the future. Late 19th and early 20th centuries: dissemination throughout manufacturing Skilled engineers and machinists, many with armoury experience, spread interchangeable manufacturing techniques to other American industries, including clockmakers and sewing machine manufacturers Wilcox and Gibbs and Wheeler and Wilson, who used interchangeable parts before 1860. Late to adopt the interchangeable system were sewing machine maker Singer Corporation (1860s–70s), reaper manufacturer McCormick Harvesting Machine Company (1870s–1880s), and several large steam engine manufacturers such as Corliss (mid-1880s), as well as locomotive makers. Typewriters followed some years later. Then large scale production of bicycles in the 1880s began to use the interchangeable system. During these decades, true interchangeability grew from a scarce and difficult achievement into an everyday capability throughout the manufacturing industries. In the 1950s and 1960s, historians of technology broadened the world's understanding of the history of the development. Few people outside that academic discipline knew much about the topic until as recently as the 1980s and 1990s, when the academic knowledge began finding wider audiences. As recently as the 1960s, when Alfred P. Sloan published his famous memoir and management treatise, My Years with General Motors, even the long-time president and chair of the largest manufacturing enterprise that had ever existed knew very little about the history of the development, other than to say that: [Henry M. Leland was], I believe, one of those mainly responsible for bringing the technique of interchangeable parts into automobile manufacturing. […] It has been called to my attention that Eli Whitney, long before, had started the development of interchangeable parts in connection with the manufacture of guns, a fact which suggests a line of descent from Whitney to Leland to the automobile industry. One of the better-known books on the subject, which was first published in 1984 and has enjoyed a readership beyond academia, has been David A. Hounshell's From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States.
Technology
Basics_6
null
908738
https://en.wikipedia.org/wiki/Gums
Gums
The gums or gingiva (plural: gingivae) consist of the mucosal tissue that lies over the mandible and maxilla inside the mouth. Gum health and disease can have an effect on general health. Structure The gums are part of the soft tissue lining of the mouth. They surround the teeth and provide a seal around them. Unlike the soft tissue linings of the lips and cheeks, most of the gums are tightly bound to the underlying bone which helps resist the friction of food passing over them. Thus, when healthy, the gums present an effective barrier to the barrage of periodontal insults to deeper tissue. Healthy gums are usually coral pink in light-skinned people, and may be naturally darker with melanin pigmentation. Changes in color, particularly increased redness, together with swelling and an increased tendency to bleed, suggest an inflammation that is possibly due to the accumulation of bacterial plaque. Overall, the clinical appearance of the tissue reflects the underlying histology, both in health and disease. When gum tissue is not healthy, it can provide a gateway for periodontal disease to advance into the deeper tissue of the periodontium, leading to a poorer prognosis for long-term retention of the teeth. Both the type of periodontal therapy and homecare instructions given to patients by dental professionals and restorative care are based on the clinical conditions of the tissue. The gums are divided anatomically into marginal, attached and interdental areas. Marginal gums The marginal gum is the edge of the gums surrounding the teeth in collar-like fashion. In about half of individuals, it is demarcated from the adjacent, attached gums by a shallow linear depression, the free gingival groove. This slight depression on the outer surface of the gum does not correspond to the depth of the gingival sulcus but instead to the apical border of the junctional epithelium. This outer groove varies in depth according to the area of the oral cavity. The groove is very prominent on mandibular anteriors and premolars. The marginal gum varies in width from 0.5 to 2.0 mm from the free gingival crest to the attached gingiva. The marginal gingiva follows the scalloped pattern established by the contour of the cementoenamel junction (CEJ) of the teeth. The marginal gingiva has a more translucent appearance than the attached gingiva, yet has a similar clinical appearance, including pinkness, dullness, and firmness. In contrast, the marginal gingiva lacks the presence of stippling, and the tissue is mobile or free from the underlying tooth surface, as can be demonstrated with a periodontal probe. The marginal gingiva is stabilized by the gingival fibers that have no bony support. The gingival margin, or free gingival crest, at the most superficial part of the marginal gingiva, is also easily seen clinically, and its location should be recorded on a patient's chart. Attached gum The attached gum is continuous with the marginal gum. It is firm, resilient, and tightly bound to the underlying periosteum of alveolar bone. The facial aspect of the attached gum extends to the relatively loose and movable alveolar mucosa, from which it is demarcated by the mucogingival junction. Attached gum may present with surface stippling. The tissue when dried is dull, firm, and immobile, with varying amounts of stippling. The width of the attached gum varies according to its location. The width of the attached gum on the facial aspect differs in different areas of the mouth. 
It is generally greatest in the incisor region (3.5 to 4.5 mm in the maxilla and 3.3 to 3.9 mm in the mandible) and less in the posterior segments, with the least width in the first premolar area (1.9 mm in the maxilla and 1.8 mm in the mandible). However, certain levels of attached gum may be necessary for the stability of the underlying root of the tooth. Interdental gum The interdental gums lie between the teeth. They occupy the gingival embrasure, which is the interproximal space beneath the area of tooth contact. The interdental papilla can be pyramidal or have a "col" shape. Attached gums are resistant to the forces of chewing and covered in keratin. The col varies in depth and width, depending on the expanse of the contacting tooth surfaces. The epithelium covering the col consists of the marginal gum of the adjacent teeth, except that it is nonkeratinized. It is mainly present in the broad interdental gingiva of the posterior teeth, and generally is not present in the interproximal tissue associated with anterior teeth because the latter tissue is narrower. In the absence of contact between adjacent teeth, the attached gum extends uninterrupted from the facial to the lingual aspect. The col may be important in the formation of periodontal disease but is visible clinically only when teeth are extracted. Characteristics of healthy gums Color Healthy gums usually have a color that has been described as "coral pink". Other colours like red, white, and blue can signify inflammation (gingivitis) or pathology. Smoking or drug use can cause discoloring as well (such as "meth mouth"). Although described as the colour coral pink, variation in colour is possible. This can be the result of factors such as thickness and degree of keratinization of the epithelium, blood flow to the gums, natural pigmentation of the skin, disease, and medications. Since the colour of the gums can vary, uniformity of colour is more important than the underlying colour itself. Excess deposits of melanin can cause dark spots or patches on the gums (melanin gingival hyperpigmentation), especially at the base of the interdental papillae. Gum depigmentation (aka gum bleaching) is a procedure used in cosmetic dentistry to remove these discolorations. Contour Healthy gums have a smooth curved or scalloped appearance around each tooth. Healthy gums fill and fit each space between the teeth, unlike the swollen gum papilla seen in gingivitis or the empty interdental embrasure seen in periodontal disease. Healthy gums hold tight to each tooth in that the gum surface narrows to "knife-edge" thin at the free gingival margin. On the other hand, inflamed gums have a "puffy" or "rolled" margin. Texture Healthy gums have a firm texture that is resistant to movement, and the surface texture often exhibits surface stippling. Unhealthy gums, on the other hand, are often swollen and less firm. Healthy gums have an orange-peel-like texture due to the stippling. Reaction to disturbance Healthy gums usually have no reaction to normal disturbance such as brushing or periodontal probing. Unhealthy gums, conversely, will show bleeding on probing (BOP) and/or purulent exudate. Clinical significance The gingival cavity microecosystem, fueled by food residues and saliva, can support the growth of many microorganisms, of which some can be injurious to health. Improper or insufficient oral hygiene can thus lead to many gum and periodontal disorders, including gingivitis or periodontitis, which are major causes for tooth failure. 
Recent studies have also shown that anabolic steroids are closely associated with gingival enlargement requiring a gingivectomy in many cases. Gingival recession occurs when there is apical movement of the gum margin away from the biting (occlusal) surface. It may indicate an underlying inflammation such as periodontitis or pyorrhea, a pocket formation, dry mouth or displacement of the marginal gums away from the tooth by mechanical (such as brushing), chemical, or surgical means. Gingival retraction, in turn, may expose the dental neck and leave it vulnerable to the action of external stimuli, and may cause root sensitivity.
Biology and health sciences
Gastrointestinal tract
Biology
909019
https://en.wikipedia.org/wiki/Sun-synchronous%20orbit
Sun-synchronous orbit
A Sun-synchronous orbit (SSO), also called a heliosynchronous orbit, is a nearly polar orbit around a planet, in which the satellite passes over any given point of the planet's surface at the same local mean solar time. More technically, it is an orbit arranged so that it precesses through one complete revolution each year, so it always maintains the same relationship with the Sun. Applications A Sun-synchronous orbit is useful for imaging, reconnaissance, and weather satellites, because every time that the satellite is overhead, the surface illumination angle on the planet underneath it is nearly the same. This consistent lighting is a useful characteristic for satellites that image the Earth's surface in visible or infrared wavelengths, such as weather and spy satellites, and for other remote-sensing satellites, such as those carrying ocean and atmospheric remote-sensing instruments that require sunlight. For example, a satellite in Sun-synchronous orbit might ascend across the equator twelve times a day, each time at approximately 15:00 mean local time. Special cases of the Sun-synchronous orbit are the noon/midnight orbit, where the local mean solar time of passage for equatorial latitudes is around noon or midnight, and the dawn/dusk orbit, where the local mean solar time of passage for equatorial latitudes is around sunrise or sunset, so that the satellite rides the terminator between day and night. Riding the terminator is useful for active radar satellites, as the satellites' solar panels can always see the Sun, without being shadowed by the Earth. It is also useful for some satellites with passive instruments that need to limit the Sun's influence on the measurements, as it is possible to always point the instruments towards the night side of the Earth. The dawn/dusk orbit has been used for solar-observing scientific satellites such as TRACE, Hinode and PROBA-2, affording them a nearly continuous view of the Sun. Orbital precession A Sun-synchronous orbit is achieved by having the osculating orbital plane precess (rotate) approximately one degree eastward each day with respect to the celestial sphere to keep pace with the Earth's movement around the Sun. This precession is achieved by tuning the inclination to the altitude of the orbit (see Technical details) such that Earth's equatorial bulge, which perturbs inclined orbits, causes the orbital plane of the spacecraft to precess with the desired rate. The plane of the orbit is not fixed in space relative to the distant stars, but rotates slowly about the Earth's axis. Typical Sun-synchronous orbits around Earth are about 600–800 km in altitude, with periods in the 96–100-minute range, and inclinations of around 98°. This is slightly retrograde compared to the direction of Earth's rotation: 0° represents an equatorial orbit, and 90° represents a polar orbit. Sun-synchronous orbits are possible around other oblate planets, such as Mars. A satellite orbiting a planet such as Venus that is almost spherical will need an outside push to maintain a Sun-synchronous orbit. Technical details The angular precession per orbit for an Earth-orbiting satellite is approximately given by ΔΩ = −3π J2 (R_E / p)² cos i, where J2 ≈ 1.08263 × 10⁻³ is the coefficient for the second zonal term related to the oblateness of the Earth, R_E ≈ 6378 km is the mean equatorial radius of the Earth, p is the semi-latus rectum of the orbit, and i is the inclination of the orbit to the equator. 
An orbit will be Sun-synchronous when the precession rate ρ = ΔΩ/T equals the mean motion of the Earth about the Sun, which is 360° per sidereal year (≈ 1.991×10⁻⁷ rad/s), so we must set ρ = 2π/T_E, where T_E is the Earth orbital period, while T is the period of the spacecraft around the Earth. As the orbital period of a spacecraft is T = 2π√(a³/μ), where a is the semi-major axis of the orbit and μ is the standard gravitational parameter of the planet (≈ 398,600 km³/s² for Earth), and as p ≈ a for a circular or almost circular orbit, it follows that

ρ ≈ −(3 J₂ R_E² √μ / (2 a^(7/2))) cos i,

or, when ρ is 360° per year,

cos i ≈ −(a / 12,352 km)^(7/2).

As an example, with a = 7200 km, i.e., for an altitude of about 800 km of the spacecraft over Earth's surface, this formula gives a Sun-synchronous inclination of 98.7°. Note that according to this approximation cos i equals −1 when the semi-major axis equals 12,352 km, which means that only lower orbits can be Sun-synchronous. The period can be in the range from 88 minutes for a very low orbit (a = 6554 km, i = 96°) to 3.8 hours (a = 12,352 km, but this orbit would be equatorial, with i = 180°). A period longer than 3.8 hours may be possible by using an eccentric orbit with p < 12,352 km but a > 12,352 km. If one wants a satellite to fly over some given spot on Earth every day at the same hour, the satellite must complete a whole number of orbits per day. Assuming a circular orbit, this comes down to between 7 and 16 orbits per day, as doing less than 7 orbits would require an altitude above the maximum for a Sun-synchronous orbit, and doing more than 16 would require an orbit inside the Earth's atmosphere or surface. The resulting valid orbits are shown in the following table. (The table has been calculated assuming the periods given. The orbital period that should be used is actually slightly longer. For instance, a retrograde equatorial orbit that passes over the same spot after 24 hours has a true period about 1.0027 times longer than the time between overpasses. For non-equatorial orbits the factor is closer to 1.)

{| class="wikitable" style="text-align:right;"
! Orbits per day !! colspan=2 | Period (h) !! Altitude (km) !! Maximal latitude !! Inclination
|-
| 16 || 1 1/2 || = 1:30 || 274 || 83.4° || 96.6°
|-
| 15 || 1 3/5 || = 1:36 || 567 || 82.3° || 97.7°
|-
| 14 || 1 5/7 || ≈ 1:43 || 894 || 81.0° || 99.0°
|-
| 13 || 1 11/13 || ≈ 1:51 || 1262 || 79.3° || 100.7°
|-
| 12 || 2 || || 1681 || 77.0° || 103.0°
|-
| 11 || 2 2/11 || ≈ 2:11 || 2162 || 74.0° || 106.0°
|-
| 10 || 2 2/5 || = 2:24 || 2722 || 69.9° || 110.1°
|-
| 9 || 2 2/3 || = 2:40 || 3385 || 64.0° || 116.0°
|-
| 8 || 3 || || 4182 || 54.7° || 125.3°
|-
| 7 || 3 3/7 || ≈ 3:26 || 5165 || 37.9° || 142.1°
|}

When one says that a Sun-synchronous orbit goes over a spot on the Earth at the same local time each time, this refers to mean solar time, not to apparent solar time. The Sun will not be in exactly the same position in the sky during the course of the year (see Equation of time and Analemma). Sun-synchronous orbits are mostly selected for Earth observation satellites, with an altitude typically between 600 and 800 km over the Earth's surface. Even if an orbit remains Sun-synchronous, however, other orbital parameters such as argument of periapsis and the orbital eccentricity evolve, due to higher-order perturbations in the Earth's gravitational field, the pressure of sunlight, and other causes. Earth observation satellites, in particular, prefer orbits with constant altitude when passing over the same spot. Careful selection of eccentricity and location of perigee reveals specific combinations where the rates of change of the perturbations are minimized, and hence the orbit is relatively stable: a frozen orbit, in which the motion of the position of the periapsis is stable.
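As a minimal numerical illustration of the relations above, the following Python sketch evaluates the Sun-synchronous inclination from the nodal-precession condition, assuming the standard values J₂ ≈ 1.08263×10⁻³, R_E ≈ 6378.137 km and μ ≈ 398,600.4 km³/s² (these constants are assumptions, not quoted from the text); it reproduces the ≈ 98.7° inclination for a = 7200 km and the 14-orbits-per-day row of the table.

```python
import math

# Assumed standard Earth constants (not quoted from the article text)
MU = 398600.4418          # km^3/s^2, Earth's standard gravitational parameter
R_E = 6378.137            # km, Earth's equatorial radius
J2 = 1.08263e-3           # second zonal harmonic coefficient
SIDEREAL_YEAR = 365.25636 * 86400       # s
RHO_SUN = 2 * math.pi / SIDEREAL_YEAR   # required nodal precession rate, rad/s

def sso_inclination_deg(a_km: float) -> float:
    """Inclination (degrees) that makes a circular orbit of semi-major axis a_km
    Sun-synchronous, from rho = -(3/2) * J2 * (R_E/a)^2 * n * cos(i)."""
    n = math.sqrt(MU / a_km**3)                        # mean motion, rad/s
    cos_i = -RHO_SUN / (1.5 * J2 * (R_E / a_km)**2 * n)
    if abs(cos_i) > 1:
        raise ValueError("no Sun-synchronous inclination exists for this semi-major axis")
    return math.degrees(math.acos(cos_i))

if __name__ == "__main__":
    # Example from the text: a = 7200 km (altitude roughly 800 km) -> about 98.7 degrees
    print(round(sso_inclination_deg(7200.0), 1))
    # Table row for 14 orbits per day: period 24/14 h -> a from Kepler's third law
    T = 24 / 14 * 3600                                 # s
    a = (MU * (T / (2 * math.pi))**2) ** (1 / 3)       # km
    print(round(a - R_E), round(sso_inclination_deg(a), 1))   # about 894 km and 99.0 degrees
```

The same helper can be used to check any other row of the table: converting the period to a semi-major axis via Kepler's third law and feeding it to the function returns the listed altitude and inclination to within rounding.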
The ERS-1, ERS-2 and Envisat satellites of the European Space Agency, as well as the MetOp spacecraft of EUMETSAT and RADARSAT-2 of the Canadian Space Agency, are all operated in such Sun-synchronous frozen orbits.
Physical sciences
Orbital mechanics
Astronomy
909402
https://en.wikipedia.org/wiki/Aerial%20root
Aerial root
Aerial roots are roots growing above the ground. They are often adventitious, i.e. formed from nonroot tissue. They are found in diverse plant species, including epiphytes such as orchids (Orchidaceae), tropical coastal swamp trees such as mangroves, banyan figs (Ficus subg. Urostigma), the warm-temperate rainforest rata (Metrosideros robusta), and pōhutukawa trees of New Zealand (Metrosideros excelsa). Vines such as common ivy (Hedera helix) and poison ivy (Toxicodendron radicans) also have aerial roots. Types This plant organ that is found in so many diverse plant-families has different specializations that suit the plant-habitat. In general growth-form, they can be technically classed as negatively gravitropic (grows up and away from the ground) or positively gravitropic (grows down toward the ground). "Stranglers" (prop-root) Banyan trees are an example of a strangler fig that begins life as an epiphyte in the crown of another tree. Their roots grow down and around the stem of the host, their growth accelerating once the ground has been reached. Over time, the roots coalesce to form a pseudotrunk, which may give the appearance that it is strangling the host. Another strangler that begins life as an epiphyte is the Moreton Bay fig (Ficus macrophylla) of tropical and subtropical eastern Australia, which has powerfully descending aerial roots. In the subtropical to warm-temperate rainforests of northern New Zealand, Metrosideros robusta, the rata tree, sends aerial roots down several sides of the trunk of the host. From these descending roots, horizontal roots grow out to girdle the trunk and fuse with the descending roots. In some cases, the "strangler" outlives the host tree, leaving as its only trace a hollow core in the massive pseudotrunk of the rata. Pneumatophores These specialized aerial roots enable plants to breathe air in habitats with waterlogged soil. The roots may grow downward from the stem or upward from typical roots. Some botanists classify them as aerating roots rather than aerial roots if they emerge from the soil. The surface of these roots is covered with porous lenticels, which lead to air-filled spongy tissue called aerenchyma. This tissue facilitates the diffusion of gases throughout the plant, as oxygen diffusion coefficient in air is four orders of magnitude greater than in water. Pneumatophores differentiate the black mangrove and grey mangrove from other mangrove species. Fishers in some areas of Southeast Asia make corks for fishing nets by shaping the pneumatophores of mangrove apples (Sonneratia caseolaris) into small floats. Members of the subfamily Taxodioideae produce woody above-ground structures, known as cypress knees, that project upward from their horizontal roots. One hypothesis suggests that these structures function as pneumatophores, facilitating gas exchange in waterlogged soils. However, modern research has largely discredited this idea, as the knees lack aerenchyma and gas exchange through them is not significant. Their true functions remain unclear, with alternative theories proposing roles such as nutrient acquisition or storage, structural support, or erosion prevention. Haustorial roots These roots are found in parasitic plants, where aerial roots become cemented to the host plant via a sticky attachment disc before intruding into the tissues of the host. Mistletoe is an example of this. 
Propagative roots Adventitious roots usually develop from plantlet nodes formed on horizontal, above-ground stems termed stolons, e.g. strawberry runners and spider plants. Some leaves develop adventitious buds, which then form adventitious roots, e.g. piggyback plant (Tolmiea menziesii) and mother-of-thousands (Kalanchoe daigremontiana). The adventitious plantlets then drop off the parent plant and develop as separate clones of the parent. Pumping and physiology Aerial roots may take in water and nutrients from the air. There are many types of aerial roots; some, such as those of mangroves, are used for aeration and not for water absorption. In other cases, they are used mainly for structure, and in order to reach the surface. Many plants rely on the leaf system for gathering water into pockets, or onto scales. These roots function as terrestrial roots do. Most aerial roots directly absorb moisture from fog or humid air. Some surprising results in studies on the aerial roots of orchids show that the velamen (the white spongy envelope of the aerial roots) is actually waterproof, preventing water loss but not allowing any water in. Once the root reaches and touches a surface, the velamen is not produced in the contact area, allowing the root to absorb water like terrestrial roots. Many other epiphytes - non-parasitic or semi-parasitic plants living on the surface of other plants - have developed cups and scales that gather rainwater or dew. The aerial roots in this case work as regular surface roots. There are also several types of roots that create a cushion in which high humidity is retained. Some aerial roots, especially in the genus Tillandsia, have a physiology that collects water from humidity and absorbs it directly. In the Sierra Mixe (named after the geographical area) variety of maize, aerial roots produce a sweet mucus that supports nitrogen-fixing bacteria, which supply 30–80 percent of the plant's nitrogen needs. On houseplants Many plants that are commonly grown indoors can develop aerial roots, such as Monstera deliciosa, pothos (Epipremnum aureum), rubber tree (Ficus elastica), fiddle-leaf fig (Ficus lyrata), Thaumatophyllum bipinnatifidum, many Philodendron species, and succulents such as Echeveria. Aerial roots on houseplants do not serve as much of a purpose as on outdoor plants, as there is no rain indoors and indoor humidity is often low due to air-conditioning and heating systems. However, studies have shown that increasing indoor humidity can result in houseplant aerial roots growing longer, with lower levels of transpiration and more efficient nitrogen intake than in aroid houseplants grown at standard indoor humidity. Aerial roots on houseplant cuttings increase the chances of successful propagation. The presence of aerial roots is not an indicator of plant health; if a plant does not have aerial roots, that is no reason for concern.
Biology and health sciences
Plant anatomy and morphology: General
Biology
910155
https://en.wikipedia.org/wiki/Jackhammer
Jackhammer
A jackhammer (pneumatic drill or demolition hammer in British English) is a pneumatic or electro-mechanical tool that combines a hammer directly with a chisel. It was invented by William McReavy, who then sold the patent to Charles Brady King. Hand-held jackhammers are generally powered by compressed air, but some are also powered by electric motors. Larger jackhammers, such as rig-mounted hammers used on construction machinery, are usually hydraulically powered. These tools are typically used to break up rock, pavement, and concrete. A jackhammer operates by driving an internal hammer up and down. The hammer is first driven down to strike the chisel and then back up to return the hammer to the original position to repeat the cycle. The effectiveness of the jackhammer is dependent on how much force is applied to the tool. It is generally used like a hammer to break the hard surface or rock in construction works and it is not considered under earth-moving equipment, along with its accessories (i.e., pusher leg, lubricator). History The first steam-powered drill was patented by Samuel Miller in 1806. The drill used steam only for raising the drill. Pneumatic drills were developed in response to the needs of the mining, quarrying, excavating, and tunneling. A pneumatic drill was proposed by C. Brunton in 1844. In 1846, a percussion drill that could be worked by steam, or atmospheric pressure obtained from a vacuum, was patented in Britain by Thomas Clarke, Mark Freeman, and John Varley. The first American "percussion drill" was made in 1848, and patented in 1849 by Jonathan J. Couch of Philadelphia, Pennsylvania. In that drill, the drill bit passed through the piston of a steam engine. The piston snagged the drill bit and hurled it against the rock face. It was an experimental model. In 1849, Couch's assistant, Joseph W. Fowle, filed a patent caveat for a percussion drill of his own design. In Fowle's drill, the drill bit was connected directly to the piston in the steam cylinder; specifically, the drill bit was connected to the piston's crosshead. The drill also had a mechanism for turning the drill bit around its axis between strokes and for advancing the drill as the hole deepened. By 1850 or 1851, Fowle was using compressed air to drive his drill, making it the first true pneumatic drill. The demand for pneumatic drills was driven especially by miners and tunnelers because steam engines needed fires to operate and the ventilation in mines and tunnels was inadequate to vent the fires' fumes. As well, mines and tunnels might contain flammable explosive gases such as methane. There was also no way to convey steam over long distances, such as from the surface to the bottom of a mine, without it condensing. By contrast, compressed air could be conveyed over long distances without loss of its energy, and after the compressed air had been used to power equipment, it could ventilate a mine or tunnel. In Europe since the late 1840s, the king of Sardinia, Carlo Alberto, had been contemplating the excavation of a tunnel through Mount Fréjus to create a rail link between Italy and France, which would cross his realm. The need for a mechanical rock drill was obvious and that sparked research in Europe on pneumatic rock drills. A Frenchman, François Cavé (fr), designed a rock drill that used compressed air, which he patented in 1851. However, the air had to be admitted manually to the cylinder during each stroke, so it was not successful. 
In 1854, in England, Thomas Bartlett made and then patented (1855) a rock drill, the bit of which was connected directly to the piston of a steam engine. In 1855, Bartlett demonstrated his drill, powered by compressed air, to officials of the Mount Fréjus tunnel project. (In 1855, a German, Schumann, invented a similar pneumatic rock drill in Freiburg, Germany.) By 1861, Bartlett’s drill had been refined by the Savoy-born engineer Germain Sommeiller (1815-1871) and his colleagues, Grandis and Grattoni. Thereafter, many inventors refined the pneumatic drill. Sommeiller took his drill to the lengthy Gotthard Pass Tunnel, then being built to link railways between Switzerland and Italy under the Alps. From there, mining and railway tunnelling expanded. Two equipment manufacturing companies, Atlas Copco and Ingersoll Rand, became dominant in the provision of compressed air drilling apparatus in Europe and America respectively, each holding significant patents. Terminology The word "jackhammer" is used in North American English and Australia, while "pneumatic drill" is used colloquially elsewhere in the English-speaking world, although strictly speaking a "pneumatic drill" refers to a pneumatically driven jackhammer. In Britain, electromechanical versions are colloquially known by the name of "Kangos". The term comes from the former British brand name now owned by Milwaukee tools. Additionally, the terms drill and breaker (demolition hammer) are non-interchangeable and refer to two differing distinct types of jackhammer (regardless of their power source). A breaker cannot rotate its steel (which for example may be either a chisel or spike) and relies on pure percussion shock to fracture and split material without cutting, whereas a (pneumatic/hydraulic) drill both impacts and rotates, which enables a steel with a tungsten carbide tipped bit to cut into hard rock such as granite, typically to create holes for blasting. Normally, only the pneumatic drill would be used at geological operations such as quarrying or mining and only the breaker at civil operations such as construction and road repair. Use A full-sized portable jackhammer is impractical for use against walls and steep slopes, except by a very strong person, as the user would have to both support the weight of the tool and push the tool back against the work after each blow. A technique developed by experienced workers is a two-man team to overcome this obstacle of gravity: one operates the hammer and the second assists by holding the hammer either on his shoulders or cradled in his arms. Both use their combined weight to push the bit into the workface. This method is commonly referred to as horizontal jackhammering. Another method is overhead jackhammering, requiring strength conditioning and endurance to hold a smaller jackhammer, called a rivet buster, over one's head. To make overhead work safer, a platform can be used. One such platform is a positioner–actuator–manipulator (PAM). This unit takes all the weight and vibration from the user. Types Pneumatic A pneumatic jackhammer, also known as a or , is a jackhammer that uses compressed air as the power source. The air supply usually comes from a portable air compressor driven by a diesel engine. Reciprocating compressors were formerly used. The unit comprised a reciprocating compressor driven, through a centrifugal clutch, by a diesel engine. 
The engine's governor provided only two speeds: idling, when the clutch was disengaged, and maximum, when the clutch was engaged and the compressor was running. Modern versions use rotary screw compressors and have more sophisticated variable governors. The unit is usually mounted on a trailer and sometimes includes an electrical generator to supply lights or electric power tools. Additionally, some users of pneumatic jackhammers may use a pneumatic lubricator, which is placed in series with the air hose powering the air hammer. This increases the life and performance of the jackhammer. Electro-mechanical or electropneumatic An electropneumatic hammer is often called a rotary hammer because it has an electric motor, which rotates a crank. The hammer has two pistons – a drive piston and a free-flight piston. The crank moves the drive piston back and forth in the same cylinder as the flight piston. The drive piston never touches the flight piston. Instead, the drive piston compresses air in the cylinder, which then propels the flight piston against a striker, which contacts the drill bit. Electric-powered tools come in a variety of sizes. They require an external power source but do not require a compressor. Although in the past these tools did not have the power of an air or pneumatic hammer, this is changing, with newer brushless-motor tools coming close to the power of a pneumatic tool and in some cases even matching it. Electric-powered tools are useful for locations where access to a compressor is limited or impractical, such as inside a building, on a crowded construction site, or in a remote location. Electropneumatic tools use a variety of chucks for attaching chisels, but the most common are SDS-max, 7/8 in hex, TE-S, and 1+1/8 in hex. The connection end size is also related to the breaking energy of the tool. For example, the Bosch and Hilti tools both use SDS-max, while the Bosch, Hilti, and Makita tools all use the 1+1/8 in hex connection. See hammer drills for more on electropneumatic hammering. Hydraulic A hydraulic breaker may be fitted to heavy equipment such as an excavator or backhoe, and is widely used for roadwork, quarrying, construction sitework, and general demolition. These larger machine-mounted breakers are known as rig-mounted or machine-mounted breakers. Such tools can be used horizontally, as they do not require the assistance of gravity to do their work. They typically use a hydraulic motor driving a sealed pneumatic hammer system, as a hydraulically activated hammer would both develop a low strike speed and potentially transfer unacceptable shock loads to the pump system. Contrast this with a steam, mechanical, or hydraulically driven pile driver. Advances in technology have allowed for portable hydraulic breakers. The jackhammer is connected with hydraulic hoses to a portable hydraulic powerpack: either a petrol or diesel engine driving a hydraulic pump, or a mini-excavator or skid-steer via a power take-off driveshaft to the machine's hydraulic system. Hydraulic power sources are more efficient than air compressors, making the kit smaller, cheaper or more powerful than a comparable pneumatic version. Pneumatic or hydraulic tools are particularly likely to be used in underground mines where there is an explosion risk (such as with coal), since they do not require high-voltage electricity to work, eliminating much of the danger of spark-induced detonation.
Bits Bit types include: Spade – provides flat finish for concrete or edging in asphalt or dirt. Flat tip – allows direction control or finer edge finish Point – general breaking Stake driver – drives concrete form stakes Scabbler – finishes surface smooth or for cleaning prior to bonding Flex chisel – flexible metal blade (attached to shank with bolts) for tile removal and scraping Bushing tool – multiple carbide points for cleaning up seams and knocking down rough spots in concrete Sharpening: chisels may be resharpened in a shop or with an angle grinder with grinding disc. After resharpening, they must then be heat treated to restore the integrity of the steel before use. Self-sharpening polygon and flat chisels are also available. Health effects The sound of the hammer blows, combined with the explosive air exhaust, makes pneumatic jackhammers dangerously loud, emitting more than 120 dB SPL near the operator’s ears. Sound-blocking earmuffs and earplugs must be worn by the operator to prevent a form of hearing loss, of which tinnitus is the main symptom. Although some pneumatic jackhammers now have a silencer around the barrel of the tool, loud air exhaust, hammer blows themselves, and compressor engine sounds remain unmuffled. Use has been linked to Raynaud syndrome; in particular, prolonged exposure to the pronounced vibration conducted by the tool can lead to a secondary form of the syndrome known as vibration white finger. Applying athletic tape is not very effective in preventing white finger but seems to help alleviate some of its discomfort. Pneumatic drill usage can also lead to a predisposition for the development of carpal tunnel syndrome. Some manufacturers of electro-pneumatic tools now offer vibration reduction systems to reduce the vibration felt by the operator. For example, Hilti manufactures a jackhammer model that has approximately the same impact energy of a pneumatic hammer, but the vibration felt by the operator is significantly less (7 m/s2). Other manufacturers such as Makita, DeWalt and Bosch also offer electric tools with vibration dampening. In addition, using a jackhammer to break up concrete pavement may expose the operator to hazardous dust containing respirable crystalline silica that may induce silicosis. The operator and those in the vicinity of the jackhammer operations should wear personal protective equipment, including an OSHA-approved respirator (US).
Technology
Hydraulics and pneumatics
null
549146
https://en.wikipedia.org/wiki/Warehouse
Warehouse
A warehouse is a building for storing goods. Warehouses are used by manufacturers, importers, exporters, wholesalers, transport businesses, customs, etc. They are usually large plain buildings in industrial parks on the outskirts of cities, towns, or villages. Warehouses usually have loading docks to load and unload goods from trucks. Sometimes warehouses are designed for the loading and unloading of goods directly from railways, airports, or seaports. They often have cranes and forklifts for moving goods, which are usually placed on ISO standard pallets and then loaded into pallet racks. Stored goods can include any raw materials, packing materials, spare parts, components, or finished goods associated with agriculture, manufacturing, and production. In India and Hong Kong, a warehouse may be referred to as a godown. There are also godowns in the Shanghai Bund. Warehousing in the past Prehistory and ancient history A warehouse can be defined functionally as a building in which to store bulk produce or goods (wares) for commercial purposes. The built form of warehouse structures throughout time depends on many contexts: materials, technologies, sites, and cultures. In this sense, the warehouse postdates the need for communal or state-based mass storage of surplus food. Prehistoric civilizations relied on family- or community-owned storage pits, or 'palace' storerooms, such as at Knossos, to protect surplus food. The archaeologist Colin Renfrew argued that gathering and storing agricultural surpluses in Bronze Age Minoan 'palaces' was a critical ingredient in the formation of proto-state power. The need for warehouses developed in societies in which trade reached a critical mass requiring storage at some point in the exchange process. This was highly evident in ancient Rome, where the horreum (pl. horrea) became a standard building form. The most studied examples are in Ostia, the port city that served Rome. The Horrea Galbae, a warehouse complex on the road towards Ostia, demonstrates that these buildings could be substantial, even by modern standards. Galba's horrea complex contained 140 rooms on the ground floor alone, covering an area of some 225,000 square feet (21,000 m2). As a point of reference, less than half of U.S. warehouses today are larger than 100,000 square feet (9290 m2). Medieval Europe The need for a warehouse implies having quantities of goods too big to be stored in a domestic storeroom. But as attested by legislation concerning the levy of duties, some medieval merchants across Europe commonly kept goods in their large household storerooms, often on the ground floor or cellars. An example is the Fondaco dei Tedeschi, the substantial quarters of German traders in Venice, which combined a dwelling, warehouse, market and quarters for travellers. From the Middle Ages on, dedicated warehouses were constructed around ports and other commercial hubs to facilitate large-scale trade. The warehouses of the trading port Bryggen in Bergen, Norway (now a World Heritage Site), demonstrate characteristic European gabled timber forms dating from the late Middle Ages, though what remains today was largely rebuilt in the same traditional style following great fires in 1702 and 1955. Industrial Revolution During the Industrial Revolution of the mid 18th century, the function of warehouses evolved and became more specialised. 
The mass production of goods launched by the industrial revolution of the 18th and 19th centuries fuelled the development of larger and more specialised warehouses, usually located close to transport hubs on canals, at railways and portside. Specialisation of tasks is characteristic of the factory system, which developed in British textile mills and potteries in the mid-late 1700s. Factory processes sped up work and deskilled labour, bringing new profits to capital investment. Warehouses also fulfill a range of commercial functions besides simple storage, exemplified by Manchester's cotton warehouses and Australian wool stores: receiving, stockpiling and despatching goods; displaying goods for commercial buyers; packing, checking and labelling orders, and dispatching them. The utilitarian architecture of warehouses responded fast to emerging technologies. Before and into the nineteenth century, the basic European warehouse was built of load-bearing masonry walls or heavy-framed timber with a suitable external cladding. Inside, heavy timber posts supported timber beams and joists for the upper levels, rarely more than four to five stories high. A gabled roof was conventional, with a gate in the gable facing the street, rail lines or port for a crane to hoist goods into the window-gates on each floor below. Convenient access for road transport was built-in via very large doors on the ground floor. If not in a separate building, office and display spaces were located on the ground or first floor. Technological innovations of the early 19th century changed the shape of warehouses and the work performed inside them: cast iron columns and later, moulded steel posts; saw-tooth roofs; and steam power. All (except steel) were adopted quickly and were in common use by the middle of the 19th century. Strong, slender cast iron columns began to replace masonry piers or timber posts to carry levels above the ground floor. As modern steel framing developed in the late 19th century, its strength and constructibility enabled the first skyscrapers. Steel girders replaced timber beams, increasing the span of internal bays in the warehouse. The saw-tooth roof brought natural light to the top story of the warehouse. It transformed the shape of the warehouse, from the traditional peaked hip or gable to an essentially flat roof form that was often hidden behind a parapet. Warehouse buildings now became strongly horizontal. Inside the top floor, the vertical glazed pane of each saw-tooth enabled natural lighting over displayed goods, improving buyer inspection. Hoists and cranes driven by steam power expanded the capacity of manual labour to lift and move heavy goods. 20th century Two new power sources, hydraulics, and electricity, re-shaped warehouse design and practice at the end of the 19th century and into the 20th century. Public hydraulic power networks were constructed in many large industrial cities around the world in the 1870s-80s, exemplified by Manchester. They were highly effective to power cranes and lifts, whose application in warehouses served taller buildings and enabled new labour efficiencies. Public electricity networks emerged in the 1890s. They were used at first mainly for lighting and soon to electrify lifts, making possible taller, more efficient warehouses. It took several decades for electrical power to be distributed widely throughout cities in the western world. 20th-century technologies made warehousing ever more efficient. 
Electricity became widely available and transformed lighting, security, lifting, and transport from the 1900s. The internal combustion engine, developed in the late 19th century, was installed in mass-produced vehicles from the 1910s. It not only reshaped transport methods but enabled many applications as a compact, portable power plant, wherever small engines were needed. The forklift truck was invented in the early 20th century and came into wide use after World War II. Forklifts transformed the possibilities of multi-level pallet racking of goods in taller, single-level steel-framed buildings for higher storage density. The forklift, and its load fixed to a uniform pallet, enabled the rise of logistic approaches to storage in the later 20th century. Always a building of function, in the late 20th century warehouses began to adapt to standardization, mechanization, technological innovation, and changes in supply chain methods. In the 21st century, warehousing is undergoing its next major development: automation. Warehouse layout A good warehouse layout consists of five areas: Loading and unloading area - This is the area where goods are unloaded from and loaded onto trucks. It can be part of the building or separate from it. Reception area - Also known as the staging area, this is where incoming goods are processed, reviewed, and sorted before proceeding to storage. Storage area - This is the area of the warehouse where goods sit as they await dispatch. It can be further subdivided into static storage for goods that will take longer to be dispatched and dynamic storage for goods that sell faster. Picking area - This is the area where goods being dispatched are prepared or modified before being shipped. Shipping area - Once goods have been prepared, they proceed to the packing or shipping area where they await the actual shipping. Typology Warehouses are generally considered industrial buildings and are usually located in industrial districts or zones (such as the outskirts of a city). LoopNet categorizes warehouses using the "industrial" property type. Craftsman Book Company's 2018 National Building Cost Manual lists "Warehouses" under the "Industrial Structures Section." In the UK, warehouses are classified under the Town and Country Planning Act 1990 as the industrial category B8 Storage and distribution. Types of warehouses include storage warehouses, distribution centers (including fulfillment centers and truck terminals), retail warehouses, cold storage warehouses, and flex space. According to Zendeq there are 13 types of warehouses: Public Warehouses Private Warehouses Government Warehouses Bonded Warehouses Distribution Centers Production/Manufacturing Warehouses Cross-Docking Warehouses Cooperative Warehouses Specialized Storage Warehouses Smart/Automated Warehouses Contract Warehouses Reverse Logistics Warehouses Consolidation Warehouses Retail warehouses These displayed goods for the home trade. These would be finished goods – such as the latest cotton blouses or fashion items. Their street frontage was impressive, so they took the styles of Italianate Palazzi. Warehouses are now more technologically oriented and help in linking stocks with the retail store in an accurate way. Richard Cobden's construction in Manchester's Mosley Street was the first palazzo warehouse. There were already seven warehouses on Portland Street when S. & J. Watts & Co.
commenced building the elaborate Watts Warehouse of 1855, but four more were opened before it was finished. The Main Benefits of Retail Warehouse Safety and Preservation Trouble-free handling Ensuring Continuous supply of products Easy access for small traders Location advantages Employment Generation Ease of financing Assisting in selling Retail warehouses serve as local and regional store distribution centers. Opportunity to obtain local and regional store distribution centers. Utilizing already established retail warehouses and retail centers. Better competitive advantage for omnichannel needs in niche markets Redevelopment of struggling urban areas, such as those where a local supermarket or a chain supermarket has gone out of business or left town. Freestanding facilities offer dock doors, clear heights compatible with industrial use and ample parking. Challenges of Retail Warehouse Tight profit margins High customer expectation Operational Inefficiency May have higher costs than initially thought. Cool warehouses and cold storage Cold storage preserves agricultural products. Refrigerated storage helps in eliminating sprouting, rotting and insect damage. Edible products are generally not stored for more than one year. Several perishable products require a storage temperature as low as −25 °C. Cold storage helps stabilize market prices and evenly distribute goods both on demand and timely basis. The farmers get the opportunity of producing cash crops to get remunerative prices. The consumers get the supply of perishable commodities with lower fluctuation of prices. Ammonia and Freon compressors are commonly used in cold storage warehouses to maintain the temperature. Ammonia refrigerant is cheaper, easily available, and has a high latent heat of evaporation, but it is also highly toxic and can form an explosive mixture when mixed with fuel oil. Insulation is also important, to reduce the loss of cold and to keep different sections of the warehouse at different temperatures. There are two main types of refrigeration system used in cold storage warehouses: vapor absorption systems (VAS) and vapor-compression systems (VCS). VAS, although comparatively costlier to install, is more economical in operation. The temperature necessary for preservation depends on the storage time required and the type of product. In general, there are three groups of products, foods that are alive (e.g. fruits and vegetables), foods that are no longer alive and have been processed in some form (e.g. meat and fish products), and commodities that benefit from storage at controlled temperature (e.g. beer, tobacco). Location is important for the success of a cold storage facility. It should be in close proximity to a growing area as well as a market, be easily accessible for heavy vehicles, and have an uninterrupted power supply. Plant attached cold storage is the preferred option for some manufacturers who want to keep their cold storage in house. Products can be transported via conveyor straight from manufacturing to a dedicated cold storage facility on-site. Overseas warehouses These catered for the overseas trade. They became the meeting places for overseas wholesale buyers where printed and plain could be discussed and ordered. Trade in cloth in Manchester was conducted by many nationalities. Behrens Warehouse is on the corner of Oxford Street and Portland Street. It was built for Louis Behrens & Son by P Nunn in 1860. 
It is a four-storey, predominantly red brick building with 23 bays along Portland Street and 9 along Oxford Street. The Behrens family were prominent in banking and in the social life of the German community in Manchester. An overseas warehouse refers to storage facilities located abroad. It plays a pivotal role in cross-border e-commerce trade, where local businesses transport goods en masse to desired market countries and then establish a warehouse to store and distribute the goods in response to local sales demands. This process includes managing tasks such as sorting, packaging, and delivering straight from the local warehouse. Overseas warehouses can be principally divided into two types: self-operated and third-party public service warehouses. Self-operated overseas warehouses are established and administered by the cross-border e-commerce enterprise. They only provide logistics services like warehousing and distribution for their own goods, implying that the entire logistics system of the cross-border e-commerce enterprise is self-controlled. On the other hand, a third-party public service overseas warehouse is built and run by a separate logistics enterprise. These warehouses provide services including order sorting, multi-channel delivery, and subsequent transportation for multiple exporting e-commerce companies. This kind of warehouse broadly indicates that the entire e-commerce logistics system is under third-party control. The fundamental business operations in overseas warehouses include the following: 1. Sellers send bulk products from their home country to an overseas warehouse, where the staff undertakes inventory and shelving. When a buyer places an order, the seller sends delivery instructions to the warehouse system and local delivery is then executed based on those instructions. 2. In instances of issues with sellers' accounts or incorrect labels, goods need to be returned to the overseas warehouse for correction and re-sale. 3. A common transfer practice combines Amazon's FBA service with third-party overseas warehouses, where goods are initially stored and then intermittently moved to FBA for replenishment, while concurrently shipping from overseas warehouses. 4. The warehouses also handle supplementary services such as product returns and exchanges. Using an overseas warehouse offers delivery-speed advantages, which can support the product price and increase gross profit to a certain extent. It can also improve the consumer experience and encourage repeat purchases, improving overall sales. Packing warehouses Modern term: fulfillment house The main purpose of packing warehouses was the picking, checking, labelling and packing of goods for export. The packing warehouses Asia House, India House and Velvet House along Whitworth Street in Manchester were some of the tallest buildings of their time. See List of packing houses. The more efficient the pick and pack process is, the faster items can be shipped to customers. Pick and pack warehousing is the process in which fulfillment centers choose products from shipments and re-package them for distribution. When shipments are received by the warehouse, items are stored and entered into an inventory management system for tracking and accountability. Picking refers to selecting the right quantities of products from warehouse storage.
Packing, on the other hand, happens when those products are placed in shipping boxes with appropriate packaging materials, labeled, documented and shipped. Railway warehouses Warehouses were built close to the major stations in railway hubs. The first railway warehouse to be built was opposite the passenger platform at the terminus of the Liverpool and Manchester Railway. There was an important group of warehouses around London Road station (now Piccadilly station). In the 1890s the Great Northern Railway Company's warehouse was completed on Deansgate: this was the last major railway warehouse to be built. The London Warehouse, Piccadilly, was one of four warehouses built by the Manchester, Sheffield and Lincolnshire Railway in about 1865 to service the new London Road Station. It had its own branch to the Ashton Canal. This warehouse was built of brick with stone detailing. It had cast iron columns with wrought iron beams. As part of its diversification activities, CWC developed a warehousing facility on railway land along a railway siding as a pilot project at the Whitefield Goods Terminal in Bangalore, after entering into an agreement with Indian Railways. This project started operation in February 2002 and resulted in attracting additional traffic to the railways, improvement in customer service and an increase in the volumes of cargo handled by CWC. The success of this project led CWC to consider developing railside warehousing complexes at other centers near identified rail terminals. Canal warehouses All these warehouse types can trace their origins back to the canal warehouses, which were used for trans-shipment and storage. Castlefield warehouses in Manchester are of this type and important as they were built at the terminus of the Bridgewater Canal in 1761. The Duke's Warehouse in the Castlefield Canal Basin in Manchester was the first canal warehouse to have the classic design features of internal canal arms, multi-storeys, split-level loading, terracing and water-powered hoists, and was built between 1769 and 1771. The Castlefield terminus of the Bridgewater Canal has attracted a considerable amount of interest from historians since the 1860s, and from archaeologists since the 1960s. These studies have developed two themes: the complexity and success (or failure) of the water management systems built by James Brindley, and the physical development of the basin as a warehouse zone. Operations A customised storage building, a warehouse enables a business to stockpile goods, e.g., to build up a full load prior to transport, or hold unloaded goods before further distribution, or store goods like wine and cheese that require maturation. As a place for storage, the warehouse has to be secure, convenient, and as spacious as possible, according to the owner's resources, the site and contemporary building technology. Before mechanised technology developed, warehouse functions relied on human labor, using mechanical lifting aids like pulley systems. Warehouse operations cover a number of important areas, spanning the receiving, organization, fulfillment, and distribution processes.
These areas include: Receiving of goods Cross-docking of goods Organizing and storing inventory Attaching asset tracking solutions (like barcodes) to assets and inventory Integrating and maintaining tracking software, like a warehouse management system Overseeing the integration of new technology Selecting picking routes Establishing sorting and packing practices Maintaining the warehouse facility Developing racking designs and warehouse infrastructure. Storage and shipping systems Some of the most common warehouse storage systems are: Pallet racking including selective, drive-in, drive-thru, double-deep, pushback, and gravity flow Cantilever racking uses arms, rather than pallets, to store long thin objects like timber. Mezzanine adds a semi-permanent story of storage within a warehouse Vertical Lift Modules are packed systems with vertically arranged trays stored on both sides of the unit. Horizontal Carousels consist of a frame and a rotating carriage of bins. Vertical Carousels consist of a series of carriers mounted on a vertical closed-loop track, inside a metal enclosure. A "piece pick" is a type of order selection process where a product is picked and handled in individual units and placed in an outer carton, tote or another container before shipping. Catalog companies and internet retailers are examples of predominantly piece-pick operations. Their customers rarely order in pallet or case quantities; instead, they typically order just one or two pieces of one or two items. Several elements make up the piece-pick system. They include the order, the picker, the pick module, the pick area, handling equipment, the container, the pick method used and the information technology used. Every movement inside a warehouse must be accompanied by a work order. Warehouse operation can fail when workers move goods without work orders, or when a storage position is left unregistered in the system. One of the most important factors to consider when designing a warehouse storage plan is product volume. Products with high demand, and those that must reach the customer or the next workstation quickly, should be kept in locations such as low storage racks or near primary aisles, which greatly minimizes the distance to be moved and thus the time consumed. Conversely, less frequently moved products can be placed further from the primary aisles or on higher storage racks. Material direction and tracking in a warehouse can be coordinated by a Warehouse Management System (WMS), a database-driven computer program. The development of work procedures goes hand in hand with training warehouse personnel. Most firms implement a WMS to standardize work procedures and encourage best practice. These systems facilitate management in their daily planning, organizing, staffing, directing, and controlling the utilization of available resources, to move and store materials into, within, and out of a warehouse, while supporting staff in the performance of material movement and storage in and around a warehouse. Logistics personnel use the WMS to improve warehouse efficiency by directing pathways and to maintain accurate inventory by recording warehouse transactions. Automation and optimization Some warehouses are completely automated, requiring only operators to work and handle all the tasks.
Pallets and product move on a system of automated conveyors, cranes and automated storage and retrieval systems coordinated by programmable logic controllers and computers running logistics automation software. These systems are often installed in refrigerated warehouses where temperatures are kept very cold to keep the product from spoiling. This is especially true in electronics warehouses that require specific temperatures to avoid damaging parts. Automation is also common where land is expensive, as automated storage systems can use vertical space efficiently. These high-bay storage areas are often more than 10 meters (33 feet) high, with some over 20 meters (65 feet) high. Automated storage systems can be built up to 40m high. For a warehouse to function efficiently, the facility must be properly slotted. Slotting addresses which storage medium a product is picked from (pallet rack or carton flow), where each item is placed for storage, and how items are picked (pick-to-light, pick-to-voice, or pick-to-paper). With a proper slotting plan, a warehouse can ensure that fast moving items are stored in the most accessible areas or closest to dock areas, improve its inventory rotation requirements, such as first in, first out (FIFO) and last in, first out (LIFO) systems, control labor costs and increase productivity. Pallet racks are commonly used to organize a warehouse. It is important to know the dimensions of racking and the number of bays needed as well as the dimensions of the product to be stored. Clearance should be accounted for if using a forklift or pallet mover to move inventory. To help speed up the receiving of goods, Radio-frequency identification (RFID) portals have been installed at the doors of the warehouses for instant verification of product information like the SKUs and their quantities. RFID technology also ensures precise inventory management and easy goods and equipment tracking. Costs An article by Thomas W. Speh published around 1990 and considered significant by the Warehousing Forum stated that a costing system for warehousing should take account of the space allocated for storage per period of time, and the time taken for handling materials as they are admitted to or discharged from the warehouse. Speh advises businesses on the factors to be taken into account in building up a "handling price" and a "storage price", which will enable an operator to make a profit and take account of risks such as unused space, at the same time as managing buyers' expectations of a fair price. Recent trends Modern warehouses commonly use a system of wide aisle pallet racking to store goods which can be loaded and unloaded using forklift trucks. Traditional warehousing has declined since the last decades of the 20th century, with the gradual introduction of Just In Time (JIT) techniques. The JIT system promotes product delivery directly from suppliers to consumer without the use of warehouses. However, with the gradual implementation of offshore outsourcing and offshoring in about the same time period, the distance between the manufacturer and the retailer (or the parts manufacturer and the industrial plant) grew considerably in many domains, necessitating at least one warehouse per country or per region in any typical supply chain for a given range of products. Recent retailing trends have led to the development of warehouse-style retail stores. These high-ceiling buildings display retail goods on tall, heavy-duty industrial racks rather than conventional retail shelving. 
Typically, items ready for sale are on the bottom of the racks, and crated or palletized inventory is in the upper racks. Essentially, the same building serves as both a warehouse and a retail store. Another trend relates to vendor-managed inventory (VMI). This gives the vendor control over maintaining the level of stock in the store. One issue with this method is that the vendor gains access to the warehouse. Education There are several non-profit organizations focused on imparting knowledge, education and research in the field of warehouse management and its role in the supply chain industry. These include the Warehousing Education and Research Council (WERC) and the International Warehouse Logistics Association (IWLA), both based in Illinois, United States; they provide professional certification and continuing education programs for the industry in that country. The Australian College of Training has government-funded programs providing personal development and continuation training in warehousing certificates II–V (Diploma); it operates online and face to face in Western Australia, and Australia-wide for online-only courses. Safety Warehousing has unique health and safety challenges and has been recognized by the National Institute for Occupational Safety and Health (NIOSH) in the United States as a priority industry sector in the National Occupational Research Agenda (NORA) to identify and provide intervention strategies regarding occupational health and safety issues. Creating a safe and productive warehouse setting starts with a culture of safety. This culture should be reinforced by managers at all levels, especially executives and owners. Creating a safe working environment begins with a safety plan that covers all parts of the warehouse and applies to all employees. Owners and managers should expect to put resources of time and money toward safety and willingly build these costs into the overall budget. Regular training and inspections should be done to ensure that all employees are knowledgeable in fire safety processes and that all fire safety measures are in place and functioning as required.
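The velocity-based slotting principle described under Automation and optimization above (fast-moving items in the most accessible locations, slow movers further away) can be sketched as a simple greedy assignment. The following Python example is only a hypothetical illustration; the SKU codes, zone names and capacities are invented and do not come from any particular warehouse management system.

```python
# Minimal, hypothetical illustration of velocity-based slotting:
# the fastest-moving SKUs are assigned to the most accessible zones.
from dataclasses import dataclass

@dataclass
class Sku:
    code: str
    picks_per_week: int   # demand signal used to rank SKUs

# Zones listed from most to least accessible (e.g. near dock doors / primary aisles first)
ZONES = ["A-ground-near-dock", "B-low-rack", "C-high-rack", "D-reserve"]
ZONE_CAPACITY = 2  # pallet positions per zone in this toy example

def slot(skus: list[Sku]) -> dict[str, list[str]]:
    """Greedy slotting: sort by pick frequency, fill the most accessible zones first."""
    plan: dict[str, list[str]] = {zone: [] for zone in ZONES}
    ranked = sorted(skus, key=lambda s: s.picks_per_week, reverse=True)
    zone_iter = iter(ZONES)
    zone = next(zone_iter)
    for sku in ranked:
        if len(plan[zone]) >= ZONE_CAPACITY:
            zone = next(zone_iter)   # current zone full, move to the next-best zone
        plan[zone].append(sku.code)
    return plan

if __name__ == "__main__":
    demo = [Sku("SKU-1", 500), Sku("SKU-2", 20), Sku("SKU-3", 350),
            Sku("SKU-4", 5), Sku("SKU-5", 120), Sku("SKU-6", 60)]
    for zone, codes in slot(demo).items():
        print(zone, codes)
```

In practice a WMS would also weigh item dimensions, storage medium and rotation rules such as FIFO or LIFO, but the ranking-by-pick-frequency step shown here is the core of the idea.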
Technology
Commercial buildings
null
549468
https://en.wikipedia.org/wiki/Dracaena%20%28plant%29
Dracaena (plant)
Dracaena () is a genus of about 200 species of trees and succulent shrubs. The formerly accepted genera Pleomele and Sansevieria are now included in Dracaena. In the APG IV classification system, it is placed in the family Asparagaceae, subfamily Nolinoideae (formerly the family Ruscaceae). It has also formerly been separated (sometimes with Cordyline) into the family Dracaenaceae or kept in the Agavaceae (now Agavoideae). The name dracaena is derived from the romanized form of the Ancient Greek – drakaina, "female dragon". The majority of the species are native to Africa and the Canary Islands, southern Asia through to northern Australia, with two species in tropical Central America. Description Species of Dracaena have a secondary thickening meristem in their trunk, termed Dracaenoid thickening by some authors, which is quite different from the thickening meristem found in dicotyledonous plants. This characteristic is shared with members of the Agavoideae and Xanthorrhoeoideae among other members of the Asparagales. Dracaena species can be identified in two growth types: treelike dracaenas (Dracaena fragrans, Dracaena draco, Dracaena cinnabari), which have aboveground stems that branch from nodes after flowering, or if the growth tip is severed, and rhizomatous dracaenas (Dracaena trifasciata, Dracaena angolensis), which have underground rhizomes and leaves on the surface (ranging from straplike to cylindrical). Many species of Dracaena are kept as houseplants due to tolerance of lower light and sparse watering. Selected species Dracaena aethiopica (Thunb.) Byng & Christenh. Dracaena afromontana Mildbr. Dracaena aletriformis (Haw.) Bos (syn. D. latifolia) Dracaena americana Donn.Sm. – Central America dragon tree Dracaena angolensis (Welw. ex Carrière) Byng & Christenh. Dracaena angustifolia (Medik.) Roxb. Dracaena arborea (Willd.) Link Dracaena arborescens (Cornu ex Gérôme & Labroy) Byng & Christenh. Dracaena aubrytiana (Carrière) Byng & Christenh. Dracaena aurea H.Mann Dracaena bagamoyensis (N.E.Br.) Byng & Christenh. Dracaena ballyi (L.E.Newton) Byng & Christenh. Dracaena braunii Engl. (syn. D. litoralis) Dracaena cinnabari Balf.f. – Socotra dragon tree Dracaena cochinchinensis (Lour.) S.C.Chen (syn. D. loureiroi) Dracaena draco (L.) L. – Canary Islands dragon tree Dracaena eilensis (Chahin.) Byng & Christenh. Dracaena ellenbeckiana Engl. - Kedong Dracaena (Ethiopia, Kenya, Uganda) Dracaena elliptica Thunb. & Dalm. Dracaena fernaldii (H.St.John) Jankalski Dracaena forbesii (O.Deg.) Jankalski Dracaena fragrans (L.) Ker Gawl. (syn. D. deremensis) – striped dracaena, compact dracaena, corn plant, cornstalk dracaena Dracaena ghiesbreghtii W.Bull ex J.J.Blandy Dracaena goldieana Bullen ex Mast. & T.Moore Dracaena halapepe (H.St.John) Jankalski Dracaena hallii (Chahin.) Byng & Christenh. Dracaena hanningtonii Baker (syn. D. oldupai) Dracaena jayniana Wilkin & Suksathan Dracaena kaweesakii Wilkin & Suksathan Dracaena konaensis (H.St.John) Jankalski Dracaena malawiana Byng & Christenh. Dracaena mannii Baker Dracaena masoniana (Chahin.) Byng & Christenh. Dracaena ombet Heuglin ex Kotschy & Peyr. – Gabal Elba dragon tree Dracaena ovata Ker Gawl. Dracaena pearsonii (N.E.Br.) Byng & Christenh. Dracaena pethera Byng & Christenh. Dracaena pinguicula (P.R.O.Bally) Byng & Christenh. Dracaena reflexa Lam. – Pleomele dracaena or "Song of India" D. reflexa var. marginata (syn. D. marginata) – red-edged dracaena or Madagascar dragon tree Dracaena rockii (H.St.John) Jankalski Dracaena sanderiana Engl. 
– ribbon dracaena, marketed as "lucky bamboo" Dracaena serrulata Baker – Yemen dragon tree Dracaena singularis (N.E.Br.) Byng & Christenh. Dracaena spathulata Byng & Christenh. Dracaena steudneri Engl. Dracaena stuckyi (God.-Leb.) Byng & Christenh. Dracaena suffruticosa (N.E.Br.) Byng & Christenh. Dracaena surculosa Lindl. – spotted or gold dust dracaena. Formerly D. godseffiana Dracaena tamaranae Marrero Rodr., R.S.Almeira & M.Gonzáles-Martin Dracaena transvaalensis Baker Dracaena trifasciata (Prain) Mabb. Dracaena umbraculifera Jacq. Dracaena viridiflora Engl. & K.Krause Dracaena zeylanica (L.) Mabb. Formerly regarded as dracaena Asparagus asparagoides (L.) Druce (as D. medeoloides L.f.) Cordyline australis (G.Forst.) Endl. (as D. australis G.Forst.) Cordyline fruticosa (L.) A.Chev. (as D. terminalis Lam.) Cordyline indivisa (G.Forst.) Steud. (as D. indivisa G.Forst.) Cordyline obtecta (Graham) Baker (as D. obtecta Graham) Cordyline stricta (Sims) Endl. (as D. stricta Sims) Dianella ensifolia (L.) DC. (as D. ensifolia L.) Liriope graminifolia (L.) Baker (as D. graminifolia L.) Lomandra filiformis (Thunb.) Britten (as D. filiformis Thunb.) Uses Ornamental Some shrubby species, such as D. fragrans, D. surculosa, D. marginata, and D. sanderiana, are popular as houseplants. Many of these are toxic to pets, though not humans, according to the ASPCA among others. Rooted stem cuttings of D. sanderiana are sold as "lucky bamboo", although only superficially resembling true bamboos. Dracaena houseplants like humidity and moderate watering. They can tolerate periods of drought but the tips of the leaves may turn brown. Leaves at the base will naturally yellow and drop off, leaving growth at the top and a bare stem. Dracaena are vulnerable to mealybugs and scale insects. Other A naturally occurring bright red resin, dragon's blood, is collected from D. draco and, in ancient times, from D. cinnabari. Modern dragon's blood is however more likely to be from the unrelated Calamus rattan palms, formerly placed in Daemonorops. It also has social functions in marking graves, sacred sites, and farm plots in many African societies. Gallery
Biology and health sciences
Asparagales
Plants
549472
https://en.wikipedia.org/wiki/Sea%20slug
Sea slug
Sea slug is a common name for some marine invertebrates with varying levels of resemblance to terrestrial slugs. Most creatures known as sea slugs are gastropods, i.e. sea snails (marine gastropod mollusks) that, over evolutionary time, have either lost their shells entirely or appear shell-less because the shell is greatly reduced or internal. The name "sea slug" is often applied to nudibranchs and to a paraphyletic set of other marine gastropods without apparent shells.

Sea slugs show enormous variation in body shape, color, and size. Most are partially translucent. The often bright colors of reef-dwelling species suggest that these animals are under constant threat from predators, and the coloration can warn other animals of the sea slug's toxic stinging cells (nematocysts) or offensive taste. Like all gastropods, they have small, razor-sharp teeth borne on a radula. Most sea slugs have a pair of rhinophores, sensory tentacles used primarily for the sense of smell, on the head, with a small eye at the base of each rhinophore. Many have feathery structures (cerata) on the back, often in a contrasting color, which act as gills. Each species of genuine sea slug depends on a particular prey animal for food, such as certain jellyfish, bryozoans, sea anemones, plankton, or other species of sea slug. Sea slugs have brains; Aplysia californica, for example, has a brain of about 20,000 nerve cells.

Shell-less marine gastropods
The name "sea slug" is often applied to numerous different evolutionary lineages of marine gastropod molluscs or sea snails, specifically those gastropods that are either not conchiferous (shell-bearing) or appear not to be. In evolutionary terms, losing the shell altogether, having a small internal shell, or having a shell so small that the soft parts of the animal cannot retract into it are all features that have evolved many times independently within the class Gastropoda, on land and in the sea; these features often cause a gastropod to be labeled with the common name "slug".

Nudibranchs (clade Nudibranchia) are a large group of marine gastropods that have no shell at all. These may be the most familiar sort of sea slug. Although most nudibranchs are not large, they are often very eye-catching because so many species have brilliant coloration. In addition to nudibranchs, a number of other taxa of marine gastropods (some easily mistaken for nudibranchs) are also often called "sea slugs".

Gastropod groups
Many of the gastropod groups called "sea slugs" comprise families within the informal taxonomic group Opisthobranchia. The term "sea slug" is perhaps most often applied to nudibranchs, many of which are brightly patterned and conspicuously ornate. The name is also often applied to the sacoglossans (clade Sacoglossa), the so-called sap-sucking or solar-powered sea slugs, which are frequently a shade of green. Another group of marine gastropods often labeled as "sea slugs" are the various families of headshield slugs and bubble snails within the clade Cephalaspidea. The sea hares, clade Aplysiomorpha, have a small, flat, proteinaceous internal shell. The clades Thecosomata and Gymnosomata are small pelagic gastropods known as "sea butterflies" and "sea angels"; many species of sea butterflies retain their shells. These are commonly known as "pteropods" but are also sometimes called sea slugs, especially the Gymnosomata, which have no shell as adults.
There is also one group of "sea slugs" within the informal group Pulmonata: the very unusual shell-less, pulmonate (air-breathing) marine species of the family Onchidiidae, within the clade Systellommatophora.

Diversity in sea slugs
Like many nudibranchs, Glaucus atlanticus can store and use stinging cells, or nematocysts, from its prey (the Portuguese man o' war) in its finger-like cerata. Other species, like the pyjama slug Chromodoris quadricolor, may use their striking colors to advertise their foul chemical taste.

The lettuce sea slug (Elysia crispata) has lettuce-like ruffles lining its body. This slug, like other Sacoglossa, uses kleptoplasty, a process in which it absorbs chloroplasts from the algae it eats and uses these "stolen" plastids to photosynthesize sugars. The ruffles of the lettuce sea slug increase its surface area, allowing the chloroplasts to absorb more light.

Headshield slugs, like Chelidonura varians, use their shovel-shaped heads to dig into the sand, where they spend most of their time. The head shield also prevents sand from entering the mantle cavity during burrowing.

Peronia indica is a species of air-breathing sea slug, a shell-less marine pulmonate gastropod mollusk in the family Onchidiidae.

The largest species of sea hare, the California black sea hare (Aplysia vaccaria), can reach a length of 75 centimetres (30 in) and a weight of 14 kilograms (31 lb). Most sea hares have several defenses: in addition to being naturally toxic, they can eject a foul ink or secrete a viscous slime to deter predators.

Some species of acochlidian sea slugs have made evolutionary transitions to living in freshwater streams, and there has been at least one evolutionary transition to land.
Biology and health sciences
Gastropods
Animals
549991
https://en.wikipedia.org/wiki/Experimental%20aircraft
Experimental aircraft
An experimental aircraft is an aircraft intended for testing new aerospace technologies and design concepts. The term research aircraft or testbed aircraft, by contrast, generally denotes an aircraft modified to perform scientific studies, such as weather research or geophysical surveying, similar to a research vessel. The term "experimental aircraft" also has a specific legal meaning in Australia, the United States, and some other countries, where it usually refers to aircraft flown under an experimental certificate. In the United States, this category also includes most homebuilt aircraft, many of which are based on conventional designs and hence are experimental in name only, the designation reflecting certain restrictions on their operation.
Technology
Military aviation
null