Dataset schema:
id: int64 (values 580 to 79M)
url: string (31 to 175 characters)
text: string (9 to 245k characters)
source: string (1 to 109 characters)
categories: string (160 distinct classes)
token_count: int64 (values 3 to 51.8k)
2,616,739
https://en.wikipedia.org/wiki/Particle%20zoo
In particle physics, the term particle zoo is used colloquially to describe the relatively extensive list of known subatomic particles, likened to the variety of species in a zoo. In the history of particle physics, the situation was particularly confusing in the late 1960s. Before the discovery of quarks, hundreds of strongly interacting particles (hadrons) were known and believed to be distinct elementary particles. It was later discovered that they were not elementary particles, but rather composites of quarks. The set of particles believed today to be elementary is described by the Standard Model and includes quarks, bosons and leptons. The term "subnuclear zoo" was coined or popularized by Robert Oppenheimer in 1956 at the VI Rochester International Conference on High Energy Physics. See also Eightfold way (physics) List of mesons List of baryons List of particles References Further reading A Tour of the Subatomic Zoo: A Guide to Particle Physics. By Cindy Schwarz. Taylor & Francis US, 1997 Raymond A. Serway, Clement J. Moses, Curt A. Moyer. Modern Physics. Cengage Learning, 2005. Particle physics
Particle zoo
Physics
241
39,575,312
https://en.wikipedia.org/wiki/Affine%20gauge%20theory
Affine gauge theory is classical gauge theory where gauge fields are affine connections on the tangent bundle over a smooth manifold X. For instance, these are gauge theory of dislocations in continuous media when X = ℝ³, the generalization of metric-affine gravitation theory when X is a world manifold and, in particular, gauge theory of the fifth force. Affine tangent bundle Being a vector bundle, the tangent bundle TX of an n-dimensional manifold X admits a natural structure of an affine bundle ATX, called the affine tangent bundle, possessing bundle atlases with affine transition functions. It is associated to a principal bundle AFX of affine frames in tangent spaces over X, whose structure group is the general affine group GA(n, ℝ). The tangent bundle TX is associated to a principal linear frame bundle FX, whose structure group is the general linear group GL(n, ℝ). This is a subgroup of GA(n, ℝ), so that the latter is a semidirect product of GL(n, ℝ) and the group ℝⁿ of translations. There is the canonical imbedding of FX into AFX onto a reduced principal subbundle, which corresponds to the canonical structure of the vector bundle TX as an affine bundle. Given linear bundle coordinates (x^μ, ẋ^μ) on the tangent bundle TX, the affine tangent bundle can be provided with affine bundle coordinates (x^μ, x̃^μ = ẋ^μ + a^μ(x)) and, in particular, with the linear coordinates (x^μ, ẋ^μ). Affine gauge fields The affine tangent bundle ATX admits an affine connection A, which is associated to a principal connection on the affine frame bundle AFX. In affine gauge theory, it is treated as an affine gauge field. Given the linear bundle coordinates (x^μ, ẋ^μ) on ATX, an affine connection A is represented by a tangent-valued connection form with linear coefficients Γ_λ^μ_ν(x) and translation coefficients σ_λ^μ(x) (displayed below, after this article's text). This affine connection defines a unique linear connection Γ on TX, given by the coefficients Γ_λ^μ_ν(x), which is associated to a principal connection on FX. Conversely, every linear connection Γ on TX is extended to an affine connection A_Γ on ATX, which is given by the same expression as Γ with respect to the linear bundle coordinates on TX, but takes a different form relative to the affine coordinates. Then any affine connection A on ATX is represented by a sum A = A_Γ + σ of the extended linear connection A_Γ and a basic soldering form σ = σ_λ^μ(x) dx^λ ⊗ ∂_μ on TX, where the identification of the holonomic frames with the vertical frames holds due to the canonical isomorphism VTX = TX ×_X TX of the vertical tangent bundle VTX of TX. Relative to the linear coordinates, the sum A = A_Γ + σ is brought into a sum of a linear connection and the soldering form. In this case, the soldering form σ often is treated as a translation gauge field, though it is not a connection. Let us note that a true translation gauge field (i.e., an affine connection which yields a flat linear connection on TX) is well defined only on a parallelizable manifold X. Gauge theory of dislocations In field theory, one meets a problem of physical interpretation of translation gauge fields because there are no fields subject to gauge translations u^μ → u^μ + a^μ. At the same time, one observes such a field in gauge theory of dislocations in continuous media because, in the presence of dislocations, displacement vectors u^i, i = 1, 2, 3, of small deformations are determined only with accuracy to gauge translations u^i → u^i + a^i. In this case, let X = ℝ³, and let an affine connection take the form of a pure translation gauge field with coefficients σ_k^i relative to the affine bundle coordinates. This is a translation gauge field whose coefficients σ_k^i describe plastic distortion, the covariant derivatives D_k u^i = ∂_k u^i − σ_k^i coincide with elastic distortion, and the strength F_km^i = ∂_k σ_m^i − ∂_m σ_k^i is a dislocation density. Equations of gauge theory of dislocations are derived from a gauge-invariant Lagrangian density in which μ and λ are the Lamé parameters of isotropic media.
These equations, however, are not independent, since a displacement field u^i can be removed by gauge translations and thereby fails to be a dynamic variable. Gauge theory of the fifth force In gauge gravitation theory on a world manifold X, one can consider an affine, but not linear, connection on the tangent bundle TX of X. Given bundle coordinates (x^μ, ẋ^μ) on TX, it takes the form A = A_Γ + σ, where the linear connection Γ and the basic soldering form σ are considered as independent variables. As was mentioned above, the soldering form σ often is treated as a translation gauge field, though it is not a connection. On the other hand, one mistakenly identifies σ with a tetrad field. However, these are different mathematical objects, because a soldering form is a section of the tensor bundle TX ⊗ T*X, whereas a tetrad field is a local section of a Lorentz reduced subbundle of the frame bundle FX. In the spirit of the above-mentioned gauge theory of dislocations, it has been suggested that a soldering field σ can describe sui generis deformations of a world manifold X which are given by a bundle morphism s = θ + σ of TX, where θ is the tautological one-form. Then one considers metric-affine gravitation theory on a deformed world manifold as that with a deformed pseudo-Riemannian metric, when a Lagrangian of a soldering field σ takes a form in which ε is the Levi-Civita symbol and T is the torsion of the linear connection Γ with respect to the soldering form σ. In particular, let us consider this gauge model in the case of small gravitational and soldering fields whose matter source is a point mass. Then one comes to a modified Newtonian potential of the fifth force type. See also Connection (affine bundle) Dislocations Fifth force Gauge gravitation theory Metric-affine gravitation theory Classical unified field theories References A. Kadic, D. Edelen, A Gauge Theory of Dislocations and Disclinations, Lecture Notes in Physics 174 (Springer, New York, 1983). G. Sardanashvily, O. Zakharov, Gauge Gravitation Theory (World Scientific, Singapore, 1992). C. Malyshev, The dislocation stress functions from the double curl T(3)-gauge equations: Linearity and look beyond, Annals of Physics 286 (2000) 249. External links G. Sardanashvily, Gravity as a Higgs field. III. Nongravitational deviations of gravitational field. Gauge theories Theories of gravity
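The coordinate expressions behind the symbols named above can be gathered in one display. This is a reconstruction in the conventions of the cited Sardanashvily references, offered as a sketch rather than as the article's own formulas:

```latex
% Coordinate expressions (reconstructed; index conventions should be
% checked against the cited references).
\begin{align*}
  &\text{linear coordinates on } TX:  && (x^\mu,\ \dot x^\mu),\\
  &\text{affine coordinates on } ATX: && (x^\mu,\ \tilde x^\mu = \dot x^\mu + a^\mu(x)),\\
  &\text{affine connection:} && A = dx^\lambda \otimes \Big(\partial_\lambda
     + \big(\Gamma_\lambda{}^\mu{}_\nu(x)\,\dot x^\nu + \sigma_\lambda^\mu(x)\big)\,\dot\partial_\mu\Big),\\
  &\text{its linear part:} && \Gamma = dx^\lambda \otimes \big(\partial_\lambda
     + \Gamma_\lambda{}^\mu{}_\nu(x)\,\dot x^\nu\,\dot\partial_\mu\big),\\
  &\text{soldering form:} && \sigma = \sigma_\lambda^\mu(x)\, dx^\lambda \otimes \partial_\mu,\\
  &\text{decomposition:} && A = A_\Gamma + \sigma .
\end{align*}
```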
Affine gauge theory
Physics
1,230
67,989,719
https://en.wikipedia.org/wiki/Crohns%20MAP%20Vaccine
The Crohns MAP Vaccine is an experimental viral vector vaccine intended to prevent or treat Crohn's disease by provoking an immune response to one possible causative agent of the disease, Mycobacterium avium subsp. paratuberculosis. The vaccine is about to begin Phase 2 of its development. One of the scientists involved with this research is Thomas Borody, known for his work in developing the 'Triple Therapy' for treating ulcers caused by Helicobacter pylori. References Vaccines
Crohns MAP Vaccine
Biology
115
2,880,043
https://en.wikipedia.org/wiki/Rho%20Arietis
The Bayer designation Rho Arietis (ρ Arietis, abbreviated to ρ Ari) is shared by three stars in the constellation of Aries: ρ¹ Arietis (44 Arietis), an A-type main sequence star. ρ² Arietis (45 Arietis), an M-type giant star. ρ³ Arietis (46 Arietis), an F-type main sequence star. In some instances, ρ³ Arietis is called simply ρ Arietis. Arietis, Rho Aries (constellation)
Rho Arietis
Astronomy
120
12,192,441
https://en.wikipedia.org/wiki/Toilet%20rim%20block
A toilet rim block is a substance in the shape of a block that is used in flush toilets and slowly dissolves in water. The blocks usually come in a small holder that is attached over the rim of a toilet and hangs down into the bowl, so that as the toilet is flushed, the water passes through the holder and comes into contact with the block. With "liquid rims", by contrast, liquid is held in a small bottle above, and connected to, the holder; it slowly releases into the bottom of the holder (which is beneath the toilet rim) and so comes into contact with the water when the toilet is flushed. Blocks are also sold loose, for placement directly in the cistern (and are therefore usable with squat toilets, which lack such a rim), although these tend to be slightly different in composition, so as to dissolve more slowly due to the constant contact with water. These may also contain a colorant (typically blue or green) which shows up in the pan or bowl water. Composition and action Toilet rim blocks are marketed as disinfectants and deodorizers, while allegedly also helping to prevent the buildup of limescale in the toilet bowl. The composition of toilet blocks can vary, but they may contain (among other components): borax (an ingredient of many detergents), hydroxyethylcellulose (a gelling agent), troclosene sodium (a disinfectant), sodium dodecylbenzenesulfonate (a surfactant), sodium percarbonate (a form of oxygen bleach), sodium carbonate ("washing soda"), and various perfumes such as limonene, butylphenyl methylpropional, and linalool. As in the closely related urinal deodorizer blocks, some of the ingredients have irritating effects when applied to skin or eyes, or when swallowed. Their ecotoxicity is rated as harmful to aquatic organisms, and these chemicals may have long-term adverse effects on an aquatic environment. See also Household chemicals Ecological footprint List of health articles References Hygiene Cleaning products Toilets
Toilet rim block
Chemistry,Biology
446
7,399,069
https://en.wikipedia.org/wiki/3-Ethylpentane
3-Ethylpentane (C7H16) is a branched saturated hydrocarbon. It is an alkane, and one of the many structural isomers of heptane, consisting of a five carbon chain with a two carbon branch at the middle carbon. An example of an alcohol derived from 3-ethylpentane is the tertiary alcohol 3-ethylpentan-3-ol. References Alkanes Ethyl compounds
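The structure described above is easy to verify with a cheminformatics toolkit. The sketch below assumes the RDKit library is available; the SMILES string CCC(CC)CC encodes the five-carbon chain with an ethyl branch on the middle carbon.

```python
# Quick structural check of 3-ethylpentane using RDKit (assumed installed).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

mol = Chem.MolFromSmiles("CCC(CC)CC")        # pentane chain, ethyl at C3
print(rdMolDescriptors.CalcMolFormula(mol))  # -> C7H16, matching the article
print(mol.GetNumAtoms())                     # -> 7 heavy (carbon) atoms
```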
3-Ethylpentane
Chemistry
94
39,641,481
https://en.wikipedia.org/wiki/Mapcode
The mapcode system is an open-source geocode system consisting of two groups of letters and digits, separated by a dot. It represents a location on the surface of the Earth, within the context of a separately specified country or territory. For example, the entrance to the elevator of the Eiffel Tower in Paris is “France 4J.Q2”. As with postal addresses, it is often unnecessary to explicitly mention the country. The mapcode algorithm defines how a WGS 84 coordinate (a latitude and longitude) can be converted into a mapcode, and vice versa. Mapcodes may be supported on an automotive navigation system. Design principles The mapcode system was designed specifically as a free, brand-less, international standard for representing any location on the surface of the Earth by a short, easy to recognize and remember “code”, usually consisting of between 4 and 7 letters and digits. The shortness is the key differentiating factor between mapcodes and other location references; more densely populated areas are designated with shorter (4 character) codes. The brevity of mapcodes was achieved through a combination of several ideas: Codes need only be accurate enough for human, everyday use. On the human scale, when you are within a few meters of a destination, you are "there". Shorter codes are possible within the context of a particular territory. For example: there are enough different combinations of 9 digits and letters to give every square meter on the surface of the Earth a different code. But to give every square meter within the Netherlands a unique code, only 6 digits and letters are required. Not all codes have to be the same length. Shorter codes are reserved for densely populated areas. The last idea, especially, yields very good results. For example, although every location within the Netherlands can be identified by a 6-letter mapcode, half of the Dutch population can be found in about 40 cities and densely populated areas that together comprise less than 6,000 square kilometers. By reserving 5-letter mapcodes for these areas, half of the population can thus be reached with 5 mapcode letters. Since human dwellings and businesses are usually the more relevant locations in daily human life, this means that the relevant locations in the Netherlands have 5-letter mapcodes more often than 6-letter mapcodes. In fact, a significant number of people live in the 100 square kilometers of very densely populated city centers of Amsterdam, Rotterdam, The Hague, Eindhoven and Utrecht, which are covered by 4-letter codes. The mapcode system thus defines a population-density-based code division for all (roughly 200) countries on Earth, all (roughly 100) overseas territories, and roughly 240 subdivisions (provinces, states, oblasts, etc.). With the exception of Antarctica and the international waters, few localities on the surface of the Earth require a mapcode longer than 7 letters. Note that mapcodes can in fact be made arbitrarily precise: at the cost of two extra characters, a mapcode is guaranteed to be less than 25 centimeters from the original coordinate. Every character added increases the accuracy further by a factor of 30. However, the mapcode documentation states that this defeats the key purpose of the mapcode system: to offer the simplest possible codes appropriate for public, every-day use. History The mapcode system was developed in 2001 by TomTom's Pieter Geelen and Harold Goddijn, soon after the GPS satellite signals were opened up for civilian use. 
The system was open-sourced under the Apache License 2.0 in 2008. The algorithms and data tables are maintained by the Mapcode Foundation, which provides source code and specifications free of charge to any organization that wants to support mapcodes. The mapcode website notes that the term "Mapcode" is a trademark and that the algorithm is patented, both to prevent "misuse" (defined as producing an incompatible derivative system). As the Apache License provides a patent grant clause, making use of the algorithm via open-sourced code will remain unencumbered as long as all patents are held by the Mapcode Foundation or an associated entity. Mapcode was proposed as an international standard (ISO/TC 211 N4037) in 2015. The term "mapcode" was also used by Denso in Japan. The international mapcode system operated by the Mapcode Foundation is in no way linked to Denso or based on the Denso system. Seeking to establish a convenient location standard, HERE supported mapcodes after its president joined the Mapcode board in 2015, exposing a mapcode for each location. Support in HERE WeGo later diminished to processing mapcodes in search input alone until, by the early 2020s, no coordinate formats other than latitude/longitude were supported. TomTom's automotive navigation applications can accept mapcodes, associating them with nearby street addresses and returning the locations of those addresses. See also References External links Mapcodes - a solution for connecting the unaddressed in South Africa Mapcodes - a new standard for representing locations (India) Geographic coordinate systems
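The precision claims above can be turned into a small arithmetic sketch. This is not the mapcode encoding algorithm itself (that lives in the Mapcode Foundation's open-source releases); it only applies the two figures quoted in the text: a guarantee of under 25 cm with two extra characters, improving by a factor of 30 per further character.

```python
# Worst-case mapcode error as a function of appended high-precision
# characters, per the figures quoted in the text (sketch only).
def worst_case_error_m(extra_chars: int) -> float:
    """Worst-case error in metres for `extra_chars` (>= 2) extra characters."""
    if extra_chars < 2:
        raise ValueError("the quoted guarantee starts at two extra characters")
    return 0.25 / (30 ** (extra_chars - 2))   # 25 cm, then /30 per character

for k in range(2, 6):
    print(k, f"{worst_case_error_m(k):.2e} m")  # 2.50e-01, 8.33e-03, ...
```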
Mapcode
Mathematics
1,030
3,313,334
https://en.wikipedia.org/wiki/Cloud%20point
In liquids, the cloud point is the temperature below which a transparent solution undergoes either a liquid-liquid phase separation to form an emulsion or a liquid-solid phase transition to form either a stable sol or a suspension that settles as a precipitate. The cloud point is analogous to the 'dew point' at which a gas-liquid phase transition called condensation occurs in water vapour (humid air) to form liquid water (dew or clouds). When the temperature is below 0 °C, the dew point is called the frost point, as water vapour undergoes a gas-solid phase transition called deposition, solidification, or freezing. In the petroleum industry, cloud point refers to the temperature below which paraffin wax in diesel or biowax in biodiesels forms a cloudy appearance. The presence of solidified waxes thickens the oil and clogs fuel filters and injectors in engines. The wax also accumulates on cold surfaces (producing, for example, pipeline or heat exchanger fouling) and forms an emulsion or sol with water. Therefore, cloud point indicates the tendency of the oil to plug filters or small orifices at cold operating temperatures. An everyday example of cloud point can be seen in olive oil stored in cold weather. Olive oil begins to solidify (via liquid-solid phase separation) at around 4 °C, whereas winter temperatures in temperate countries can often be colder than 0 °C. In these conditions, olive oil begins to develop white, waxy clumps/spheres of solidified oil that sink to the bottom of the container. In crude or heavy oils, cloud point is synonymous with wax appearance temperature (WAT) and wax precipitation temperature (WPT). The cloud point of a nonionic surfactant or glycol solution is the temperature at which the mixture starts to phase-separate and two phases appear, thus becoming cloudy. This behavior is characteristic of non-ionic surfactants containing polyoxyethylene chains, which exhibit reverse solubility-versus-temperature behavior in water and therefore "cloud out" at some point as the temperature is raised. Glycols demonstrating this behavior are known as "cloud-point glycols" and are used as shale inhibitors. The cloud point is affected by salinity, being generally lower in more saline fluids. Measuring cloud point of petroleum products Manual method The test oil is required to be transparent in layers 40 mm in thickness (in accordance with ASTM D2500). The wax crystals typically first form at the lower circumferential wall with the appearance of a whitish or milky cloud. The cloud point is the temperature just above that at which these crystals first appear. The test sample is first poured into a test jar to a level approximately half full. A cork carrying the test thermometer is used to close the jar, with the thermometer bulb positioned to rest at the bottom of the jar. The entire assembly is then placed in a constant-temperature cooling bath on top of a gasket to prevent excessive cooling. At intervals of 1 °C, the sample is taken out, inspected for cloudiness, and quickly replaced. Successively lower-temperature cooling baths may be used depending on the cloud point. The lower-temperature cooling baths must have a temperature stability of not less than 1.5 K for this test. Automatic method ASTM D5773, Standard Test Method of Cloud Point of Petroleum Products (Constant Cooling Rate Method), is an alternative to the manual test procedure. It uses automatic apparatus and has been found to be equivalent to test method D2500.
The D5773 test method determines the cloud point in a shorter period of time than manual method D2500, and less operator time is required to run the test. Additionally, no external chiller bath or refrigeration unit is needed. D5773 is capable of determining cloud point within a temperature range of −60 °C to +49 °C, and results are reported with a temperature resolution of 0.1 °C. Under ASTM D5773, the test sample is cooled by a Peltier device at a constant rate of 1.5 ± 0.1 °C/min. During this period, the sample is continuously illuminated by a light source, and an array of optical detectors continuously monitors the sample for the first appearance of a cloud of wax crystals. The temperature at which wax crystals are first detected in the sample is taken as the cloud point. See also Cold filter plugging point Gel point Krafft point – visually similar but specific to solutions of surfactants Petroleum Pour point References External links Phase Technology Manufacturer of ASTM D5773 automatic cloud point analyzers Chemical properties Edible oil chemistry
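The automatic method lends itself to a compact sketch of its control loop. The code below is a schematic simulation of the D5773 procedure as described above, not vendor firmware; `read_scattering` is a hypothetical stand-in for the instrument's detector array, and the threshold is an assumed value.

```python
# Schematic D5773-style detection loop: cool at 1.5 degC/min while watching
# an optical signal for the first appearance of wax crystals.
COOLING_RATE_C_PER_MIN = 1.5
SAMPLE_PERIOD_S = 1.0
SCATTER_THRESHOLD = 0.05   # assumed detector threshold (arbitrary units)

def find_cloud_point(read_scattering, start_temp_c: float, min_temp_c: float = -60.0):
    """Return the temperature at which scattering first exceeds the threshold."""
    temp = start_temp_c
    step = COOLING_RATE_C_PER_MIN * SAMPLE_PERIOD_S / 60.0   # degC per sample
    while temp >= min_temp_c:                # D5773 range reaches -60 degC
        if read_scattering(temp) > SCATTER_THRESHOLD:
            return round(temp, 1)            # reported to 0.1 degC resolution
        temp -= step
    return None                              # no cloud point detected in range

# Example with a toy sample model whose wax appears below -17.0 degC:
cp = find_cloud_point(lambda t: 0.0 if t > -17.0 else 0.2, start_temp_c=25.0)
print(cp)   # -> approximately -17.0, at the sampling resolution
```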
Cloud point
Chemistry
956
58,624,238
https://en.wikipedia.org/wiki/NGC%201898
NGC 1898 is a globular cluster in the constellation of Dorado at an approximate distance of 170,000 light-years. NGC 1898 is located in the Large Magellanic Cloud, a satellite galaxy of the Milky Way. It was for some time believed to have been discovered by John Herschel in 1834; however, recent research shows it was first observed by James Dunlop in 1826. References External links 1898 Dorado Globular clusters Large Magellanic Cloud
NGC 1898
Astronomy
93
1,059,742
https://en.wikipedia.org/wiki/Fire%20brick
A fire brick, firebrick, fireclay brick, or refractory brick is a block of ceramic material used in lining furnaces, kilns, fireboxes, and fireplaces. A refractory brick is built primarily to withstand high temperature, but will also usually have a low thermal conductivity for greater energy efficiency. Usually dense fire bricks are used in applications with extreme mechanical, chemical, or thermal stresses, such as the inside of a wood-fired kiln or a furnace, which is subject to abrasion from wood, fluxing from ash or slag, and high temperatures. In other, less harsh situations, such as in an electric or natural gas fired kiln, more porous bricks, commonly known as "kiln bricks", are a better choice. They are weaker, but they are much lighter and easier to form and insulate far better than dense bricks. In any case, firebricks should not spall, and their strength should hold up well during rapid temperature changes. Manufacture In the making of firebrick, fire clay is fired in the kiln until it is partly vitrified. For special purposes, the brick may also be glazed. There are two standard sizes of fire brick: 9 in × 4½ in × 3 in (229 mm × 114 mm × 76 mm) and 9 in × 4½ in × 2½ in (229 mm × 114 mm × 64 mm). Also available are firebrick "splits", which are half the thickness and are often used to line wood stoves and fireplace inserts. The dimensions of a split are usually 9 in × 4½ in × 1¼ in (229 mm × 114 mm × 32 mm). Fire brick was invented in 1822 by William Weston Young in the Neath Valley of Wales. High temperature applications The silica fire bricks that line steel-making furnaces are used at temperatures up to about 1,650 °C (3,000 °F), which would melt many other types of ceramic, and in fact part of the silica firebrick liquefies. High-temperature Reusable Surface Insulation (HRSI), a material with the same composition, was used in the insulating tiles of the Space Shuttle. Non-ferrous metallurgical processes use basic refractory bricks because the slags used in these processes readily dissolve the "acidic" silica bricks. The most common basic refractory bricks used in smelting non-ferrous metal concentrates are "chrome-magnesite" or "magnesite-chrome" bricks (depending on the relative ratios of magnesite and chromite ores used in their manufacture). Lower temperature applications A range of other materials find use as firebricks for lower temperature applications. Magnesium oxide is often used as a lining for furnaces. Silica bricks are the most common type of bricks used for the inner lining of furnaces and incinerators. As the inner lining is usually of a sacrificial nature, fire bricks of higher alumina content may be employed to lengthen the duration between re-linings. Very often cracks can be seen in this sacrificial inner lining shortly after being put into operation. These reveal that more expansion joints should have been included in the first place, but the cracks now act as expansion joints themselves and are of no concern as long as structural integrity is not affected. Silicon carbide, with high abrasive strength, is a popular material for hearths of incinerators and cremators. Common red clay brick may be used for chimneys and wood-fired ovens. Potential use to store energy Firebricks, with their ability to withstand high temperatures and store heat, offer a promising solution for storing energy. These refractory bricks can be used to store industrial process heat, leveraging excess renewable electricity to create a low-cost, continuous heat source for industry. Due to their construction from common materials, firebrick storage systems are much more cost-effective than battery systems for thermal energy storage.
Research across 149 countries indicates that using firebricks for heat storage can significantly reduce the need for electricity generation, battery storage, hydrogen production, and low-temperature heat storage. This approach could lower overall energy costs by about 1.8%, making firebricks a valuable tool in reducing the costs of transitioning to 100% clean, renewable energy. See also Harbison-Walker Refractories Company Equivalent VIII Niles Firebrick References Further reading Bricks Refractory materials Silicates
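A back-of-envelope sensible-heat calculation makes the storage capacity discussed above concrete. The numbers here are illustrative assumptions (a specific heat of roughly 1.0 kJ/(kg·K) is typical of dense ceramics), not figures from the research cited above.

```python
# Sensible-heat storage per tonne of firebrick (assumed illustrative values):
# specific heat ~1.0 kJ/(kg*K), bricks cycled between 400 degC and 1200 degC.
mass_kg = 1000.0            # one tonne of brick
c_j_per_kg_k = 1000.0       # ~1.0 kJ/(kg*K), typical of ceramics (assumed)
delta_t_k = 800.0           # 1200 degC - 400 degC temperature swing

energy_j = mass_kg * c_j_per_kg_k * delta_t_k
print(f"{energy_j / 3.6e6:.0f} kWh per tonne")  # ~222 kWh of heat per tonne
```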
Fire brick
Physics
856
6,050,376
https://en.wikipedia.org/wiki/Metz%20%28company%29
Metz-Werke GmbH & Co. KG was a German consumer electronics manufacturer. Besides Loewe and TechniSat, Metz was the only remaining TV manufacturer which developed and produced its devices in Germany. Its head office is in Zirndorf, Bavaria. The company filed for insolvency in 2014 and, backed by new investors, has since 2015 been reformed as two independent companies, Metz Consumer Electronics GmbH and Metz mecatech GmbH. History 28 November 1938: Paul Metz founds the company. 1939: Manufacture of electronic devices for Carl Zeiss. Development of the product range through the manufacture of products with high-frequency technology. Until 1945: Production of radio technology for short-wave transmitters and receivers. 1947: Consumer electronics division established with the manufacture of the first Metz radios. 1950: Complete range of radio devices from the smallest Super radio to the radio gramophone. In keeping with the motto, "Metz - always 1st class", everything was done to ensure “the highest quality of sound and reception”. Construction of an electro-acoustic laboratory and another special laboratory. 1952: Another division is established: manufacture of flash units commissioned by Agfa and Carl-Braun begins. Amateur flash photography is revolutionised. Gold medal for sound, design and performance at an international exhibition in Luxembourg, certificate of honour and gold medal at an international exhibition in Thessaloniki. Manufacture of Metz “mecablitz” flash units begins. 1953: Spectacular presentation of 8 table and stand units at the “Great German Radio, Phonograph and Television Exhibition” in Düsseldorf. 1954: Worldwide sensation - production of the “Babyphon” (battery powered radio-phonograph combination), the first portable radio with a record player. 1955: TV manufacture begins. Metz presents the first television set with 3D surround sound system at the radio and television exhibition in Düsseldorf. 1957: The third division is established with the opening of a factory in Zirndorf making sound equipment storage units. Metz brings the world's first electronic transistor flash unit to market, the “mecablitz 100”. 1958: First hi-fi systems from Metz: stereo hi-fi system "410" with 2 channel amplifier, stereo hi-fi music cabinet “705” with stereo auto-changer. 1963: 25 years of Metz. 1966: A new TV set factory opens in Zirndorf. Metz brings cassette players to the Hanover Trade Fair for the first time. 1967: Manufacture of colour TVs begins. 1972: Production of flash units reaches 2.5 million. 1978: “markt intern” names Metz as “No. 1 retail partner” for the first time. 1979: Inclusion of plastic in manufacturing. Development of the SCA adapter system for adapting cameras by different manufacturers for Metz flash units. 1982: Production of flash units reaches 5 million. 1987: Conversion to GmbH & Co. KG (a German limited partnership). 1989: Product range is extended with the addition of the VHS and S-VHS camcorder and S-VHS video recorder. 1990: Metz starts using its own 100-Hertz technology. Metz receives performance index award for the first time: a survey by rf-Brief puts Metz at No. 1. 1991: Construction of a development centre in Zirndorf. 1993: Founder Paul Metz dies. The company continues under the leadership of his wife Helene Metz. 1994: Flash unit mecablitz 50 MZ-5 is introduced; for the first time, a Metz flash unit is controlled by 3 microcomputers. 1995: Start of Metz's module concept - modular chassis construction allows TV sets to be individually adapted.
Implementation of a computer-like slot system for the TV chassis – making it possible for uncomplicated and individual upgrading, conversion and retrofitting to be carried out on-site by Metz's technical partners in line with customer wishes. 1997: 50 years of Metz – consumer electronics. The Paul and Helene Metz Foundation is established. Presentation of the “varioline” individual TV range – in 4:3 and 16:9 versions. A variable slot system and individual colouring (64,000 colour variations) allows maximum adaptation to customer requirements. 1998: Anniversary: 60 years of Metz. As part of the photokina trade fair, Metz presents the world's smallest flash unit with high light output and the simplest 2-button operation. General conversion of TV manufacture to 100 hertz technology. 1999: Construction of a new service centre and relocation of the whole company to Zirndorf. At the IFA in Berlin, Metz presents new TV sets from its premium range, featuring extremely thin cathode ray tubes (4:3 and 16:9) as well as integrated room lighting + 2 DVD players + a hi-fi CD receiver. 2000: photokina: Metz also comes into its own in the digital sector - introducing the newest generation of flash units, with a simplified operating concept and exceptional special functions, as well as a generation of updateable SCA-3002 adapters. Start of Metz's digital module concept: - presentation of the first fully integrated DVB-S module for the novel digital SAT-TV in Spring: all TV sets made after April 1997 can be digitally retrofitted. Presentation of high-quality TV sets in the Astral/Spectral design range. A combination of high-quality material, anodized aluminium and recyclable plastics create an impressive whole. 2001: Extension of the plastics factory. IFA Berlin: Expansion of digital TV retrofit sets in the DVB-T and DVB-C section. Introduction of high-quality 42” plasma TV sets in 16:9 format. Extensive upgrading of the flash range in the digital camera division. 2002: Beginning of a new generation of chassis with intuitive operation and new, comfortable remote control. Expansion of the DVB module to include DVB-S and DVB-T with CI interfaces. 10 millionth flash unit. 2003: Fully integrated hard disk recorder with timeshift and background recording functions. 2004: World first: adaptive 28 CS-2 digital flash unit with immediate correction button. 2005: Metz's in-house design of “made in Germany” LCD-TV sets (HDTV ready) are introduced for the first time, together with the new Slim TV product range with visibly reduced CRT housing depth. 2006: Numerous awards for Metz LCD TVs; earns first place for both picture quality and user convenience. Talio: best selling LCD TV in specialist retailers. Introduction of the first flash unit with USB interface and innovative dual reflector technology. 2007: LCD TV range with HDTV reception. Globally unique retrofitting. 2008: Anniversary: 70 years of Metz. Wide range of LCD TVs with high definition 42” full HD panels, 100 Hz DMC technology and integrated hard disk recorder. An innovative world first – wireless macro flash with two individually controllable reflectors. 2009: Expansion of the LCD TV range with the addition of a 55” LCD Metz Sirius 32 HDTV 100R, which stands out as a reference model in the 32” segment. All Metz LCDs are equipped with 100 Hz or 200 Hz DMC and full HD technology. 2010: Metz is extending four of its LCD product families through the addition of energy-efficient LED technology. 
Metz is updating its system flash range with 25 new or revised models. The 24 AF-1 digital and the 44 AF-1 digital are being presented as the new high-performance classes. 2011: Metz presents the smart linking solution ‘Metz Media System’. The LCD TV range is extended through the addition of two 3D-compatible product families. 19 November 2014: Metz files for insolvency. January 2015: About 110 of the 540 employees are laid off. March 2015: Two investors are found and the company is split in two. The TV business is taken over by the Chinese electronics manufacturer Skyworth as Metz Consumer Electronics GmbH, whereas the plastics technology and flash business are bought by the local Daum Group (Germany) to form Metz mecatech GmbH. 298 of the employees are retained. 2020: Metz filed for insolvency again. Products Besides television sets, Metz built electronic flashes for use with many brands of cameras. The flash devices use adaptors, the SCA system (Special Camera Adaption), to make them compatible with different brands of cameras. Metz is also well known for its high-end television sets. As of 2025, Metz appears to be out of the flash business. Gallery See also References External links Official site Electronics companies of Germany Defunct photography companies of Germany German brands Companies based in Bavaria Electronics companies established in 1938 Privately held companies of Germany 1938 establishments in Germany 2015 mergers and acquisitions Radio manufacturers
Metz (company)
Engineering
1,783
4,336,426
https://en.wikipedia.org/wiki/Torula
Torula (Cyberlindnera jadinii) is a species of yeast. Use Torula, in its inactive form (usually labeled as torula yeast), is widely used as a flavoring in processed foods and pet foods. It is often grown on wood liquor, a byproduct of paper production, which is rich in wood sugars (xylose). It is pasteurized and spray-dried to produce a fine, light grayish-brown powder with a slightly yeasty odor and a gentle, slightly meaty taste. Cyberlindnera jadinii (which in these contexts is often still labelled with its synonym Candida utilis) can be used, in a blend with various other yeasts, as a secondary cheese starter culture "... to inoculate pasteurised milk, which mimic the natural yeast flora of raw milk and improve cheese flavour. Other functions of the added yeast organisms are the neutralisation of the curd (lactate degradation) and galactose consumption." Like the flavor enhancer monosodium glutamate (MSG), torula is rich in glutamic acid. It has therefore become a popular replacement among manufacturers wishing to eliminate MSG or hide flavor-enhancer usage in an ingredients list. It also enables the marketing of "all-natural" ingredients. Torula finds accepted use in Europe and California for the organic control of olive flies. When dissolved in water, it serves as a food attractant, with or without additional pheromone lures, in McPhail and OLIPE traps, which drown the insects. In field trials in Sonoma County, California, mass trapping reduced crop damage to an average of 30% compared to almost 90% in untreated controls. See also Nutritional yeast References Saccharomycetes Yeasts
Torula
Biology
376
1,390,368
https://en.wikipedia.org/wiki/Drive%20by%20wire
Drive by wire or DbW in the automotive industry is the technology that uses electronics or electro-mechanical systems in place of mechanical linkages to control driving functions. The concept is similar to fly-by-wire in the aviation industry. Drive-by-wire may refer to just the propulsion of the vehicle through electronic throttle control, or it may refer to electronic control over propulsion as well as steering and braking, which separately are known as steer by wire and brake by wire, along with electronic control over other vehicle driving functions. Driver input is traditionally transferred to the motor, wheels, and brakes through a mechanical linkage attached to controls such as a steering wheel, throttle pedal, hydraulic brake pedal, brake pull handle, and so on, which apply mechanical forces. In drive-by-wire systems, driver input does not directly adjust a mechanical linkage; instead the input is processed by an electronic control unit which controls the vehicle using electromechanical actuators. The human–machine interface, such as a steering wheel, yoke, accelerator pedal, brake pedal, and so on, may include haptic feedback that simulates the resistance of hydraulic and mechanical pedals and steering, including steering kickback. Components such as the steering column, intermediate shafts, pumps, hoses, belts, coolers, vacuum servos and master cylinders are eliminated from the vehicle. Safety requirements for drive-by-wire are specified by the ISO 26262 standard at Automotive Safety Integrity Level (ASIL) D. Properties Dispensing with mechanical linkages has several advantages: it reduces complexity and simplifies assembly; simplifies service and tuning; reduces the force required to engage inputs and allows it to be customized with haptic technology; allows for more interior design freedom in the placement of input mechanisms; allows for automation of driving functions; reduces cabin noise by eliminating the acoustic linkage to the drive systems; and by reducing floor openings it improves the crash behavior of the vehicle. Because driver inputs can be overridden, safety can be improved by providing computer controlled intervention of vehicle controls with systems such as electronic stability control (ESC), adaptive cruise control and lane assist systems. Each drive-by-wire system adds actuators to the vehicle and therefore increases energy consumption. For instance, steer-by-wire technology adds actuator motors to create the torque needed to turn the wheels, and a feedback transducer to create the "road feel" on the steering wheel. Safety considerations require redundancy of driver input sensors, vehicle communication networks, actuators, and other systems. Automotive safety standards such as ISO 26262 require drive-by-wire fail-operational and fail-safe behaviors. Safety and security Failures in drive by wire systems can lead to potentially hazardous situations where safety depends entirely on the vehicle's failure mode. The Aachen University Institute for Motor Vehicles (ika – Institut für Kraftfahrzeuge Aachen), in collaboration with Mercedes-AMG and others, studies the operation, risks, and safety mechanisms of drive-by-wire systems through its drive-by-wire concept vehicle, SpeedE. Studied scenarios include loss of control over acceleration, brakes, or steering. Early by-wire systems had mechanical backup systems in case the by-wire systems failed.
The modern drive by wire paradigm dispenses with mechanical backups and relies on redundancy, fail-operational systems, and other safety and security measures: computational redundancy through lockstep CPUs; functional redundancy through modular design where the failure of one module is compensated by an identical module, for example by torque vectoring to compensate for a failed steering or braking module; multi-sensor fault detection; self-isolation of damaged systems; and fault-tolerant communication. Such fail-safes are specified at ASIL D of the ISO 26262 standard. Assessment and standardization of drive-by-wire computer security has also taken place. Researchers demonstrated in 2011 and 2013 that some systems in commercially-available vehicles are susceptible to hacking, allowing for external control of the vehicle. Hacking demonstrations included remote activation of systems like the horn, windshield wipers, accelerator, brakes, and transmission. Modern standards such as the ISO/SAE 21434 standard and UNECE Regulations 155, 156, and 157 require dedicated cryptographic modules that encrypt all communication between the ECUs and the drive system components. Systems Brake by wire A brake-by-wire system eliminates the need for a mechanical connection that transfers force between the brakes and a driver input apparatus such as a pedal or lever. The three main types of brake-by-wire systems are: electronic parking brakes which have, since the turn of the 21st century, become more common; electro-hydraulic brakes (EHB) which can be implemented alongside legacy hydraulic brakes and as of 2020 have found small-scale usage in the automotive industry; and electro-mechanical brakes (EMB) that use no hydraulic fluid, which as of 2020 have yet to be successfully introduced in production vehicles due to their novel actuation methods. Shift by wire Shift-by-wire employs electrical or electronic connections that replace the mechanical connection between the driver's gearshift mechanism and the transmission. Since becoming commercially available in 1996, shift-by-wire has been commonly used in automated manual transmissions, and was later implemented in semi-automatic and automatic transmissions. Park by wire may be considered a form of shift-by-wire. Not to be confused with park-brake by wire, which engages a parking brake, park-by-wire engages the parking pawl. A parking pawl in a traditional automatic transmission has a mechanical link to the gear stick and locks the transmission in the park position when the gear-shift handle is set in "park". A park-by-wire system uses electronic commands sent to an actuator that engages the parking pawl. Steer by wire A vehicle equipped with a steer-by-wire system is able to steer some or all of its wheels without a steering column connected to the wheel axles. It is different from electric power steering or power-assist, as those systems still rely on the steering column to mechanically transfer some steering torque to the wheels. A vehicle with a steer-by-wire system may be manually controlled by a driver through a steering wheel, a yoke, or any other steering apparatus which is connected to one or more electronic control units, which use the input to control steering actuators that turn the wheels and steer the vehicle. The steering wheel or yoke may be equipped with haptic feedback to simulate road feel and wheel resistance, and change depending on the vehicle speed or customizable settings.
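The multi-sensor fault detection mentioned above is commonly built on majority voting across redundant sensors. The sketch below shows a two-out-of-three (2oo3) voter for a steering-angle signal; the tolerance and sensor values are illustrative assumptions, not taken from any particular vehicle platform.

```python
# Two-out-of-three (2oo3) voting over redundant steering-angle sensors.
def vote_2oo3(a: float, b: float, c: float, tol: float = 0.5):
    """Return a trusted reading, or None when no two sensors agree."""
    pairs = [(a, b), (a, c), (b, c)]
    agreeing = [(x, y) for x, y in pairs if abs(x - y) <= tol]
    if not agreeing:
        return None            # all three disagree: trigger fail-safe reaction
    x, y = agreeing[0]
    return (x + y) / 2.0       # average the first agreeing pair

# Example: sensor b has drifted; the voter still returns ~12.0 degrees.
print(vote_2oo3(12.0, 14.7, 12.1))
```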
Throttle by wire Accelerate-by-wire or throttle-by-wire, more commonly known as electronic throttle control, is a system that actuates vehicle propulsion without any mechanical connections, such as cables, from the accelerator pedal to the throttle valve of the engine or other propulsion systems. In electric vehicles, this system controls the electric motors by sensing the accelerator pedal input and sending commands to the power inverter modules. References External links Fusion of redundant information in brake-by-wire systems, using a fuzzy voter Vehicle braking technologies Vehicle safety technologies Automotive steering technologies
Drive by wire
Engineering
1,470
22,822,116
https://en.wikipedia.org/wiki/PA%20degree
In the mathematical field of computability theory, a PA degree is a Turing degree that computes a complete extension of Peano arithmetic (Jockusch 1987). These degrees are closely related to fixed-point-free (DNR) functions, and have been thoroughly investigated in recursion theory. Background In recursion theory, φ_e denotes the computable function with index (program) e in some standard numbering of computable functions, and φ_e^B denotes the eth computable function using a set B of natural numbers as an oracle. A set A of natural numbers is Turing reducible to a set B if there is a computable function that, given an oracle for set B, computes the characteristic function χ_A of the set A. That is, there is an e such that χ_A = φ_e^B. This relationship is denoted A ≤_T B; the relation ≤_T is a preorder. Two sets of natural numbers are Turing equivalent if each is Turing reducible to the other. The notation A ≡_T B indicates A and B are Turing equivalent. The relation ≡_T is an equivalence relation known as Turing equivalence. A Turing degree is a collection of sets of natural numbers, such that any two sets in the collection are Turing equivalent. Equivalently, a Turing degree is an equivalence class of the relation ≡_T. The Turing degrees are partially ordered by Turing reducibility. The notation a ≤_T b indicates there is a set in degree b that computes a set in degree a. Equivalently, a ≤_T b holds if and only if every set in b computes every set in a. A function f from the natural numbers to the natural numbers is said to be diagonally nonrecursive (DNR) if, for all n, f(n) ≠ φ_n(n) (here the inequality holds by definition if φ_n(n) is undefined). If the range of f is the set {0, 1} then f is a DNR₂ function. It is known that there are DNR functions that do not compute any DNR₂ function. Completions of Peano arithmetic A completion of Peano arithmetic is a set of formulas in the language of Peano arithmetic, such that the set is consistent in first-order logic and such that, for each formula, either that formula or its negation is included in the set. Once a Gödel numbering of the formulas in the language of PA has been fixed, it is possible to identify completions of PA with sets of natural numbers, and thus to speak about the computability of these completions. A Turing degree is defined to be a PA degree if there is a set of natural numbers in the degree that computes a completion of Peano arithmetic. (This is equivalent to the proposition that every set in the degree computes a completion of PA.) Because there are no computable completions of PA, the degree 0 consisting of the computable sets of natural numbers is not a PA degree. Because PA is an effective first-order theory, the completions of PA can be characterized as the infinite paths through a particular computable subtree of 2^{<ω}. Thus the PA degrees are exactly the degrees that compute an infinite path through this tree. Properties The PA degrees are upward closed in the Turing degrees: if a is a PA degree and a ≤_T b then b is a PA degree. The Turing degree 0′, which is the degree of the halting problem, is a PA degree. There are also PA degrees that are not above 0′. For example, the low basis theorem implies that there is a low PA degree. On the other hand, Antonín Kučera has proved that there is a degree less than 0′ that computes a DNR function but is not a PA degree (Jockusch 1989:197). Carl Jockusch and Robert Soare (1972) proved that the PA degrees are exactly the degrees of DNR₂ functions.
By definition, a degree is PA if and only if it computes a path through the tree of completions of Peano arithmetic. A stronger property holds: a degree a is a PA degree if and only if a computes a path through every infinite computable subtree of 2^{<ω} (Simpson 1977). Arslanov's completeness criterion M. M. Arslanov gave a characterisation of which c.e. sets are complete (i.e. Turing equivalent to the halting problem ∅′): for a c.e. set A, A ≡_T ∅′ if and only if A computes a DNR function. In particular, every PA degree computes a DNR₂ function and hence a DNR function, so 0′ is the only c.e. PA degree. See also Basis theorem (computability) Kőnig's lemma References Carl Jockusch (1987), "Degrees of functions with no fixed points", Logic Colloquium '87, Fenstad, Frolov, and Hilpinen, eds., North-Holland. Carl Jockusch and Robert Soare (1972), "Π⁰₁ classes and degrees of theories", Transactions of the American Mathematical Society, v. 173, pp. 33–56. Stephen G. Simpson (1977), "Degrees of unsolvability: a survey of results", Handbook of Mathematical Logic, Barwise (ed.), North-Holland, pp. 631–652. Computability theory
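For reference, the main notions and results above can be collected in one display (a compact summary in standard notation, not text from the original article):

```latex
% Compact restatement of the definitions and facts used above.
\begin{align*}
  f \text{ is DNR} &\iff \forall n\;\, f(n) \neq \varphi_n(n),\\
  f \text{ is DNR}_2 &\iff f \text{ is DNR and } \operatorname{ran} f \subseteq \{0,1\},\\
  \mathbf{a} \text{ is a PA degree} &\iff \mathbf{a} \text{ computes a complete consistent extension of PA}\\
  &\iff \mathbf{a} \text{ contains a DNR}_2 \text{ function (Jockusch--Soare 1972)},\\
  \text{c.e. } A \equiv_T \emptyset' &\iff A \text{ computes a DNR function (Arslanov)}.
\end{align*}
```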
PA degree
Mathematics
1,096
34,927,770
https://en.wikipedia.org/wiki/Convective%20mixing
In fluid dynamics, convective mixing is the vertical transport of a fluid and its properties. In many important ocean and atmospheric phenomena, convection is driven by density differences in the fluid, e.g. the sinking of cold, dense water in polar regions of the world's oceans; and the rising of warm, less-dense air during the formation of cumulonimbus clouds and hurricanes. See also Atmospheric convection Bénard cells Churchill–Bernstein equation Double diffusive convection Heat transfer Heat conduction Thermal radiation Heat pipe Laser-heated pedestal growth Nusselt number Thermomagnetic convection References Notes Further reading Convection
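As a minimal numerical illustration of the density-difference mechanism described above, the sketch below computes the reduced gravity acting on a cold, dense water parcel overlying warmer water; the density values are assumed, textbook-style figures rather than numbers from this article.

```python
# Reduced gravity g' = g * (rho_cold - rho_warm) / rho_ref for a cold,
# dense parcel of seawater sitting above warmer water (assumed values).
g = 9.81                 # m/s^2
rho_cold = 1028.0        # kg/m^3, near-freezing polar surface water (assumed)
rho_warm = 1025.0        # kg/m^3, underlying warmer water (assumed)

g_reduced = g * (rho_cold - rho_warm) / rho_warm
print(f"reduced gravity: {g_reduced:.4f} m/s^2")   # ~0.03 m/s^2 drives sinking
```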
Convective mixing
Physics,Chemistry
129
68,511,809
https://en.wikipedia.org/wiki/Lutetium%28III%29%20nitrate
Lutetium(III) nitrate is an inorganic compound, a salt of lutetium and nitric acid with the chemical formula Lu(NO3)3. The compound forms colorless crystals, dissolves in water, and also forms crystalline hydrates. The compound is poisonous. Synthesis Dissolving lutetium oxide in nitric acid: Lu2O3 + 6 HNO3 → 2 Lu(NO3)3 + 3 H2O. To obtain the anhydrous nitrate, the powdered metal is added to nitrogen dioxide (as its dimer N2O4) dissolved in ethyl acetate: Lu + 3 N2O4 → Lu(NO3)3 + 3 NO. Physical properties Lutetium(III) nitrate forms colorless hygroscopic crystals. It is soluble in water and ethanol. It forms crystalline hydrates of the composition Lu(NO3)3·nH2O, where n = 3, 4, 5, 6. Chemical properties The hydrated lutetium nitrate thermally decomposes to form LuONO3, and decomposes to lutetium oxide upon further heating. The compound forms ammonium hexafluorolutetate with ammonium fluoride: Lu(NO3)3 + 6 NH4F → (NH4)3LuF6 + 3 NH4NO3. Applications Lutetium(III) nitrate is used to obtain metallic lutetium and also as a chemical reagent. It is used as a component of materials for the production of laser crystals. References Lutetium compounds Nitrates
Lutetium(III) nitrate
Chemistry
249
80,849
https://en.wikipedia.org/wiki/Seashell
A seashell or sea shell, also known simply as a shell, is a hard, protective outer layer usually created by an animal or organism that lives in the sea. Most seashells are made by mollusks, such as snails, clams, and oysters, to protect their soft insides. Empty seashells are often found washed up on beaches by beachcombers. The shells are empty because the animal has died and the soft parts have decomposed or been eaten by another organism. A seashell is usually the exoskeleton of an invertebrate (an animal without a backbone), and is typically composed of calcium carbonate or chitin. Most shells that are found on beaches are the shells of marine mollusks, partly because these shells are usually made of calcium carbonate, and endure better than shells made of chitin. Apart from mollusk shells, other shells that can be found on beaches are those of barnacles, horseshoe crabs and brachiopods. Marine annelid worms in the family Serpulidae create shells which are tubes made of calcium carbonate cemented onto other surfaces. The shells of sea urchins are called "tests", and the moulted shells of crabs and lobsters are exuviae. While most seashells are external, some cephalopods have internal shells. Seashells have been used by humans for many different purposes throughout history and prehistory. However, seashells are not the only kind of shells; in various habitats, there are shells from freshwater animals such as freshwater mussels and freshwater snails, and shells of land snails. Terminology When the word "seashells" refers only to the shells of marine mollusks, then studying seashells is part of conchology. Conchologists or serious collectors who have a scientific bias are in general careful not to disturb living populations and habitats: even though they may collect a few live animals, most responsible collectors do not often over-collect or otherwise disturb ecosystems. The study of mollusks (including their shells) is known as malacology; a person who studies mollusks is known as a malacologist. Occurrence Seashells are commonly found in beach drift, which is natural detritus deposited along strandlines on beaches by the waves and the tides. Shells are very often washed up onto a beach empty and clean, the animal having already died. Empty seashells are often picked up by beachcombers. However, the majority of seashells which are offered for sale commercially have been collected alive (often in bulk) and then killed and cleaned, specifically for the commercial trade. This type of large-scale exploitation can sometimes have a strong negative impact on local ecosystems, and sometimes can significantly reduce the distribution of rare species. Shell synthesis Seashells are created by the molluscs that use them for protection. Molluscs have an outer layer of tissue on their bodies, the mantle, which creates the shell material and connects the shell to the mollusc. Specialized cells in the mantle form the shell using different minerals and proteins: the proteins create the framework that supports the growing shell, while calcium carbonate, the main compound of the shell structure, is deposited onto it and aids in adhesion. Molluscan seashells The word seashell is often used to mean only the shell of a marine mollusk.
Marine mollusk shells that are familiar to beachcombers, and thus most likely to be called "seashells", are the shells of marine species of bivalves (or clams), gastropods (or snails), scaphopods (or tusk shells), polyplacophorans (or chitons), and cephalopods (such as nautilus and spirula). These shells are very often the most commonly encountered, both in the wild and for sale as decorative objects. Marine species of gastropods and bivalves are more numerous than land and freshwater species, and the shells are often larger and more robust. The shells of marine species also often have more sculpture and more color, although this is by no means always the case. In the tropical and sub-tropical areas of the planet, there are far more species of colorful, large, shallow water shelled marine mollusks than there are in the temperate zones and the regions closer to the poles. Although there are a number of species of shelled mollusks that are quite large, there are vast numbers of extremely small species too, see micromollusks. Not all mollusks are marine. There are numerous land and freshwater mollusks, see for example snail and freshwater bivalves. In addition, not all mollusks have an external shell: some mollusks such as some cephalopods (squid and octopuses) have an internal shell, and many mollusks have no shell, see for example slug and nudibranch. Bivalves Bivalves are often the most common seashells that wash up on large sandy beaches or in sheltered lagoons. They can sometimes be extremely numerous. Very often the two valves become separated. There are more than 15,000 species of bivalves that live in both marine and fresh water. Examples of bivalves are clams, scallops, mussels, and oysters. The majority of bivalves consist of two identical shells that are held together by a flexible hinge. The animal's body is held protectively inside these two shells. Bivalves that do not have two shells either have one shell or lack a shell altogether. The shells are made of calcium carbonate and are formed in layers by secretions from the mantle. Bivalves, also known as pelecypods, are mostly filter feeders; they draw in water through their gills, which trap tiny food particles. Some bivalves have eyes and an open circulatory system. Bivalves are used all over the world as food and as a source of pearls. The larvae of some freshwater mussels can be dangerous to fish and can bore through wood. Shell Beach, Western Australia, is a beach which is entirely made up of the shells of the cockle Fragum erugatum. Gastropods Certain species of gastropod seashells (the shells of sea snails) can sometimes be common, washed up on sandy beaches, and also on beaches that are surrounded by rocky marine habitat. Polyplacophorans Chiton plates or valves often wash up on beaches in rocky areas where chitons are common. Chiton shells, which are composed of eight separate plates and a girdle, usually come apart not long after death, so they are almost always found as disarticulated plates. Plates from larger species of chitons are sometimes known as "butterfly shells" because of their shape. Cephalopods Only a few species of cephalopods have shells (either internal or external) that are sometimes found washed up on beaches. Some cephalopods such as Sepia, the cuttlefish, have a large internal shell, the cuttlefish bone, and this often washes up on beaches in parts of the world where cuttlefish are common. Spirula spirula is a deep water squid-like cephalopod.
It has an internal shell which is small (about 1 in or 24 mm) but very light and buoyant. This chambered shell floats very well and therefore washes up easily and is familiar to beachcombers in the tropics. Nautilus is the only genus of cephalopod that has a well-developed external shell. Females of the cephalopod genus Argonauta create a papery egg case which sometimes washes up on tropical beaches and is referred to as a "paper nautilus". The largest group of shelled cephalopods, the ammonites, are extinct, but their shells are very common in certain areas as fossils. Molluscan seashells used by other animals Empty molluscan seashells are a sturdy, and usually readily available, "free" resource which is often easily found on beaches, in the intertidal zone, and in the shallow subtidal zone. As such they are sometimes used second-hand by animals other than humans for various purposes, including for protection (as in hermit crabs) and for construction. Mollusks Carrier shells in the family Xenophoridae are marine shelled gastropods, fairly large sea snails. Most species of xenophorids cement a series of objects to the rim of their shells as they grow. These objects are sometimes small pebbles or other hard detritus. Very often shells of bivalves or smaller gastropods are used, depending on what is available on the particular substrate where the snail itself lives. It is not clear whether these shell attachments serve as camouflage, or whether they are intended to help prevent the shell sinking into a soft substrate. Small octopuses sometimes use an empty shell as a sort of cave to hide in, or hold seashells around themselves as a form of protection like a temporary fortress. Invertebrates Almost all genera of hermit crabs use or "wear" empty marine gastropod shells throughout their lifespan, in order to protect their soft abdomens, and in order to have a strong shell to withdraw into if attacked by a predator. Each individual hermit crab is forced to find another gastropod shell on a regular basis, whenever it grows too large for the one it is currently using. Some hermit crab species live on land and may be found quite some distance from the sea, including those in the tropical genus Coenobita. Conchology There are numerous popular books and field guides on the subject of shell-collecting. Although there are a number of books about land and freshwater mollusks, the majority of popular books emphasize, or focus exclusively on, the shells of marine mollusks. Both the science of studying mollusk shells and the hobby of collecting and classifying them are known as conchology. The line between professionals and amateur enthusiasts is often not well defined in this subject, because many amateurs have contributed to, and continue to contribute to, conchology and the larger science of malacology. Many shell collectors belong to "shell clubs" where they can meet others who share their interests. A large number of amateurs collect the shells of marine mollusks, and this is partly because many shells wash up empty on beaches, or live in the intertidal or sub-tidal zones, and are therefore easily found and preserved without much in the way of specialized equipment or expensive supplies. Some shell collectors find their own material and keep careful records, or buy only "specimen shells", which means shells which have full collecting data: information including how, when, where, in what habitat, and by whom, the shells were collected. 
On the other hand, some collectors buy the more widely available commercially imported exotic shells, the majority of which have very little data, or none at all. To museum scientists, having full collecting data (when, where, and by whom it was collected) with a specimen is far more important than having the shell correctly identified. Some owners of shell collections hope to be able to donate their collection to a major natural history or zoology museum at some point; however, shells with little or no collecting data are usually of no value to science, and are likely not to be accepted by a major museum. Apart from any damage to the shell that may have happened before it was collected, shells can also suffer damage when they are stored or displayed. For an example of one rather serious kind of damage see Byne's disease. Shell clubs There are a number of clubs or societies which consist of people who are united by a shared interest in shells. In the US, these clubs are more common in southerly coastal areas, such as Florida and California, where the marine fauna is rich in species. Identification Seashells are usually identified by consulting general or regional shell-collecting field guides, and specific scientific books on different taxa of shell-bearing mollusks (monographs) or "iconographies" (limited text – mainly photographs or other illustrations). (For a few titles on this subject in the US, see the list of books at the foot of this article.) Identifications to the species level are generally achieved by examining illustrations and written descriptions, rather than by the use of identification keys, as is often the case in identifying plants and other phyla of invertebrates. The construction of functional keys for the identification of the shells of marine mollusks to the species level can be very difficult, because of the great variability within many species and families. The identification of certain individual species is often very difficult, even for a specialist in that particular family. Some species cannot be differentiated on the basis of shell character alone. Numerous smaller and more obscure mollusk species (see micromollusk) are yet to be discovered and named. In other words, they have not yet been differentiated from similar species and assigned scientific (binomial) names in articles in journals recognized by the International Commission on Zoological Nomenclature (ICZN). Large numbers of new species are published in the scientific literature each year. There are currently an estimated 100,000 species of mollusks worldwide. Non-marine "seashells" The term seashell is also applied loosely to mollusk shells that are not of marine origin, for example by people walking the shores of lakes and rivers using the term for the freshwater mollusk shells they encounter. Seashells purchased from tourist shops or dealers may include various freshwater and terrestrial shells as well. Non-marine items offered may include large and colorful tropical land snail shells, freshwater apple snail shells, and pearly freshwater unionid mussel shells. This can be confusing to collectors, as non-marine shells are often not included in their reference books. Cultural significance Currency Seashells have been used as a medium of exchange in various places, including many Indian Ocean and Pacific Ocean islands, also in North America, Africa and the Caribbean.
The most common species of shells to be used as currency have been Monetaria moneta, the "money cowry", and certain dentalium tusk shells, used in northwestern North America for many centuries. Many of the tribes and nations all across the continent of Africa have historically used the cowry as their medium of exchange. The cowry circulated, historically, alongside metal coins and goods, and foreign currencies. Being durable and easy to carry, the cowry made a very favorable currency. Some tribes of the indigenous peoples of the Americas used shells for wampum and hair pipes. The Native American wampum belts were made of the shell of the quahog clam. Tools Seashells have often been used as tools, because of their strength and the variety of their shapes. Giant clams (family Tridacnidae) have been used as bowls, and when big enough, even as bathtubs and baptismal fonts. Melo melo, the "bailer volute", is so named because Aboriginal Australians used it to bail out their canoes. Many different species of bivalves have been used as scrapers, blades, clasps, and other such tools, due to their shape. Some marine gastropods have been used for oil lamps, the oil being poured in the aperture of the shell, and the siphonal canal serving as a holder for the wick. Horticulture Because seashells are in some areas a readily available bulk source of calcium carbonate, shells such as oyster shells are sometimes used as soil conditioners in horticulture. The shells are broken or ground into small pieces in order to have the desired effect of raising the pH and increasing the calcium content in the soil. Religion and spirituality Seashells have played a part in religion and spirituality, sometimes even as ritual objects. In Christianity, the scallop shell is considered to be the symbol of Saint James the Great, see Pecten jacobaeus. In Hinduism, left-handed shells of Turbinella pyrum (the sacred shankha) are considered to be sacred to the god Vishnu. A person who finds a left-handed chank shell (one that coils to the left) is also considered sacred to Vishnu. The chank shell also plays an important role in Buddhism. Cowries have often been considered to be symbols of female fertility. They were often treated as actual fertility charms. The dorsum of the shell resembles a pregnant belly, and the underside of the shell resembles a vulva. In the South Indian state of Kerala, cowries are used for making astrological predictions. In the Santería religion, shells are used for divination. The Moche culture of ancient Peru worshipped animals and the sea, and often depicted shells in their art. In a Christian legend, the top of the sand dollar represents the Star of Bethlehem that led the Wise Men to the manger of Christ, with the outline of the Easter Lily, a sign of Jesus' Resurrection, surrounding the "star". Four holes represent the holes in the Lord's hands and feet, and the center hole represents the wound to His Sacred Heart from the spear of Longinus. The other side of the sand dollar bears the outline of a poinsettia, and a broken sand dollar is said to release five doves, the doves of Peace and Joy. Musical instruments Seashells have been used as musical instruments, chiefly wind instruments, for many hundreds if not thousands of years. Most often the shells of large sea snails are used, as trumpets, by cutting a hole in the spire of the shell or cutting off the tip of the spire altogether.
Various kinds of large marine gastropod shells can be turned into "blowing shells"; however, the most commonly encountered species used as "conch" trumpets are: The sacred chank, Turbinella pyrum, known in India as the shankha. In Tibet it is known as "dung-dkar". The triton shell, also known as "Triton's trumpet", Charonia tritonis, which is used as a trumpet in Melanesian and Polynesian culture, and also in Korea and Japan. In Japan this kind of trumpet is known as the horagai. In Korea it is known as the nagak. In some Polynesian islands it is known as "pu". The queen conch, Lobatus gigas, which has been used as a trumpet in the Caribbean. Children in some cultures are often told the myth that one can hear the sound of the ocean by holding a seashell to one's ear. This is due to the effect of seashell resonance. Personal adornment Whole seashells or parts of seashells have been used as jewelry or in other forms of adornment since prehistoric times. Mother of pearl was historically primarily a seashell product, although more recently some mother of pearl comes from freshwater mussels. Also see pearl. Shell necklaces have been found in Stone Age graves as far inland as the Dordogne Valley in France. Seashells are often used whole and drilled, so that they can be threaded like beads, or cut into pieces of various shapes. Sometimes shells can be found that are already "drilled" by predatory snails of the family Naticidae. Fine whole shell necklaces were made by Tasmanian Aboriginal women for more than 2,600 years. The necklaces represent a significant cultural tradition which is still practised by Palawa women elders. The shells used include pearly green and blue-green maireener (rainbow kelp) shells, brown and white rice shells, black cats' teeth shells and pink button shells. Naturally-occurring, beachworn cone shell "tops" (the broken-off spire of the shell, which often has a hole worn at the tip) can function as beads without any further modification. In Hawaii these natural beads were traditionally collected from the beach drift in order to make puka shell jewelry. Since it is hard to obtain large quantities of naturally-occurring beachworn cone tops, almost all modern puka shell jewelry uses cheaper imitations, cut from thin shells of other species of mollusk, or even made of plastic. Shells historically have been and still are made into, or incorporated into, necklaces, pendants, beads, earrings, buttons, brooches, rings, hair combs, belt buckles and other uses. The shell of the large "bullmouth helmet" sea snail, scientific name Cypraecassis rufa, was historically, and still is, used to make valuable cameos. Mother of pearl from many seashells including species in the families Trochidae, Turbinidae, Haliotidae, and various pearly bivalves, has often been used in jewelry, buttons, etc. In London, Pearly Kings and Queens traditionally wear clothing covered in patterns made up of hundreds of "pearl buttons", in other words, buttons made of mother-of-pearl or nacre. In recent years, however, the majority of "pearl buttons" are imitations that are made of pearlescent plastic. Crafts "Sailor's valentines" were late 19th-century decorative keepsakes which were made in the Caribbean, and which were often purchased by sailors to give to their loved ones back home, for example in England. These valentines consisted of elaborate arrangements of small seashells glued into attractive symmetrical designs, which were encased in a wooden (usually octagonal) hinged box-frame.
The patterns used often featured heart-shaped designs, or included a sentimental expression of love spelled out in small shells. The making of shell work artifacts is a practice of Aboriginal women from La Perouse in Sydney, dating back to the 19th century. Shell work objects include baby shoes, jewelry boxes and replicas of famous landmarks, including the Sydney Harbour Bridge and the Sydney Opera House. The shellwork tradition began as an Aboriginal women's craft which was adapted and tailored to suit the tourist souvenir market, and which is now considered high art. Architectural decoration Small pieces of colored and iridescent shell have been used to create mosaics and inlays, which have been used to decorate walls, furniture and boxes. Large numbers of whole seashells, arranged to form patterns, have been used to decorate mirror frames, furniture and human-made shell grottos. Art A very large outdoor sculpture of a gastropod seashell at Akkulam is a reference to the sacred chank shell Turbinella pyrum of India. In 2003, Maggi Hambling designed a striking 13 ft (4 m) high sculpture of a scallop shell which stands on the beach at Aldeburgh, in England. The goddess of love, Venus or Aphrodite, is often traditionally depicted rising from the sea on a seashell. In The Birth of Venus, Botticelli depicted the goddess Venus rising from the ocean on a scallop shell. Poultry feeds Seashells found in the creeks and backwaters of the west coast of India are used as an additive to poultry feed. They are crushed and mixed with jowar, maize and dry fish. Use Seashells, mainly those of bivalves and gastropods, are fundamentally composed of calcium carbonate. In this sense, they have the potential to be used as a raw material in the production of lime. Along the Gulf Coast of the United States, oyster shells were mixed into cement to make "shellcrete", which could form bricks, blocks and platforms. It could also be applied over logs. A notable example is the 19th-century Sabine Pass Lighthouse in Louisiana, near Texas. Shells of other marine invertebrates Arthropods Many arthropods have sclerites, or hardened body parts, which form a stiff exoskeleton made up mostly of chitin. In crustaceans, especially those of the class Malacostraca (crabs, shrimps and lobsters, for instance), the plates of the exoskeleton may be fused to form a more or less rigid carapace. Moulted carapaces of a variety of marine malacostracans often wash up on beaches. The horseshoe crab is an arthropod of the family Limulidae. The shells or exuviae of these arachnid relatives are common in beach drift in certain areas of the world. Echinoderms Some echinoderms such as sea urchins, including heart urchins and sand dollars, have a hard "test" or shell. After the animal dies, the flesh rots out and the spines fall off, and then fairly often the empty test washes up whole onto a beach, where it can be found by a beachcomber. These tests are fragile and easily broken into pieces. Brachiopods The brachiopods, or lamp shells, superficially resemble clams, but the phylum is not closely related to mollusks. Most brachiopod lineages ended during the Permian-Triassic extinction event, and their ecological niche was filled by bivalves. A few of the remaining species of brachiopods occur in the low intertidal zone and thus can be found live by beachcombers. Annelids Some polychaetes, marine annelid worms in the family Serpulidae, secrete a hard tube made of calcium carbonate, adhering to stones or other shells.
This tube resembles, and can be confused with, the shell of marine gastropod mollusks in the family Vermetidae, the worm snails. Atypical shells A few other categories of marine animals leave remains which might be considered "seashells" in the widest possible sense of the word. Chelonians Sea turtles have a carapace and plastron of bone and cartilage which is developed from their ribs. Infrequently, a turtle "shell" will wash up on a beach. Hard corals Pieces of the hard skeleton of corals commonly wash up on beaches in areas where corals grow. The construction of the shell-like structures of corals is aided by a symbiotic relationship with a class of algae, zooxanthellae. Typically a coral polyp will harbor a particular species of algae, which will photosynthesise and thereby provide energy for the coral and aid in calcification, while living in a safe environment and using the carbon dioxide and nitrogenous waste produced by the polyp. Coral bleaching is a disruption of the balance between polyps and algae, and can lead to the breakdown and death of coral reefs. Soft corals The skeletons of soft corals such as gorgonians, also known as sea fans and sea whips, commonly wash ashore in the tropics after storms. Plankton and protists Plant-like diatoms and animal-like radiolarians are two forms of plankton which form hard silicate shells. Foraminifera and coccolithophores create shells known as "tests" which are made of calcium carbonate. These shells and tests are usually microscopic in size, though in the case of foraminifera, they are sometimes visible to the naked eye, often resembling miniature mollusk shells. See also Bailey-Matthews Shell Museum Marine biogenic calcification Mollusk shell Ocean acidification Seashell resonance Seashell surface, a mathematical construct Shell growth in estuaries Shell purse Small shelly fauna References Citations Sources Books Abbott R. Tucker & S. Peter Dance, 1982, Compendium of Seashells, A full color guide to more than 4,200 of the World's Marine shells, E.P. Dutton, Inc, New York. Abbott R. Tucker, 1985, Seashells of the World: a guide to the better-known species, Golden Press, New York. Abbott, R. Tucker, 1986, Seashells of North America, St. Martin's Press, New York. Abbott, R. Tucker, 1974, American Seashells, Second edition, Van Nostrand Rheinhold, New York. External links Hohlman Shell Collection, Florida Institute of Technology Beautiful Shells (1856) by H. G. Adams Zoology Mollusc shells Mollusc products Collecting
Seashell
Biology
5,762
23,208,230
https://en.wikipedia.org/wiki/Meter%20data%20management
Meter data management (MDM) refers to software that performs long-term data storage and management for the vast quantities of data delivered by smart metering systems. This data consists primarily of usage data and events that are imported from the head-end servers managing the data collection in advanced metering infrastructure (AMI) or automatic meter reading (AMR) systems. MDM is a component in the smart grid infrastructure promoted by utility companies. This may also incorporate meter data analytics, the analysis of data emitted by electric smart meters that record consumption of electric energy. MDM systems An MDM system will typically import the data, then validate, cleanse and process it before making it available for billing and analysis, as illustrated in the sketch below. Products for meter data include: Smart meter deployment planning and management; Meter and network asset monitoring and management; Automated smart meter provisioning (i.e. addition, deletion and updating of meter information at the utility and AMR side) and billing cutover; Integration with meter-to-cash systems, workforce management systems, asset management and other systems. Furthermore, an MDM may provide reporting capabilities for load and demand forecasting, management reports, and customer service metrics. An MDM provides application programming interfaces (APIs) between the MDM and the multiple destination systems that rely on meter data. This is the first step to ensure that consistent processes and 'understanding' get applied to the data. Besides this common functionality, an advanced MDM may provide facilities for remote connect/disconnect of meters, power status verification/power restoration verification and on-demand reads of remote meters. Data analysis Smart meters send usage data to the central head-end systems as often as every minute from each meter, whether installed at a residential, commercial or industrial customer site. Utility companies sometimes analyze this voluminous data as well as collect it. Some of the reasons for analysis are to make efficient energy buying decisions based on usage patterns, launching energy efficiency or energy rebate programs, energy theft detection, comparing and correcting metering service provider performance, and detecting and reducing unbilled energy. This data not only helps utility companies make their businesses more efficient, but also helps consumers save money by using less energy at peak times, making it both economical and environmentally beneficial. Smart meter infrastructure is fairly new to the utilities industry. As utility companies collect more and more data over the years, they may uncover further uses for this detailed smart meter data. Similar analysis can be applied to water and gas usage as well as electric usage. According to a 2012 web posting, data that is required for complete meter data analytics may not reside in the same database. Instead, it might reside in disparate databases among various departments of utility companies. See also Automatic meter reading Advanced metering infrastructure Smart grid Smart meter References Flow meters Public services Data analysis
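To make the validate-cleanse-process step described above concrete, here is a minimal, illustrative sketch of the kind of validation-and-estimation rule an MDM system might apply to interval reads before billing. The field names, the spike threshold, and the neighbor-averaging rule are all hypothetical choices of this sketch; production systems implement far richer, configurable rule sets.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntervalRead:
    meter_id: str
    kwh: Optional[float]  # None models a read the head-end system never delivered

def validate_and_estimate(reads, spike_limit_kwh: float = 50.0):
    """Flag missing or implausible interval reads and estimate them from valid neighbors."""
    def valid(v):
        return v is not None and 0.0 <= v <= spike_limit_kwh

    raw = [r.kwh for r in reads]
    cleaned = []
    for i, v in enumerate(raw):
        if valid(v):
            cleaned.append(v)
            continue
        prev = cleaned[i - 1] if i > 0 else None               # already-cleaned neighbor
        nxt = next((x for x in raw[i + 1:] if valid(x)), None)  # next valid raw read
        candidates = [c for c in (prev, nxt) if c is not None]
        cleaned.append(sum(candidates) / len(candidates) if candidates else 0.0)
    return cleaned

reads = [IntervalRead("M1", 1.2), IntervalRead("M1", None), IntervalRead("M1", 1.6)]
print(validate_and_estimate(reads))  # [1.2, 1.4, 1.6]
```

A real MDM would also record which values were estimated, since billing and settlement rules usually treat estimated reads differently from actual ones.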
Meter data management
Chemistry,Technology,Engineering
568
20,895,383
https://en.wikipedia.org/wiki/Spiffits
Spiffits are a group of branded products, launched in April 1989 as the first complete line of pre-moistened household cleaning towelettes. The line included a glass cleaner, furniture polish, a soft scouring cleanser, bathroom cleaner and a multi-surface cleaner, and competed against similar individual products sold under the Lysol, Pledge, Clorox and Endust brands. The Spiffits towelettes were manufactured by DowBrands, the consumer products division of the Dow Chemical Company at the time. The product was launched with an $18 million advertising campaign developed by Henderson Advertising, of Greenville, South Carolina. The ad campaign featured animated Spiffits "spokesboxes", produced using single-frame stop-motion filming techniques and moldable rubber box puppets, considered an innovative animation technique at the time. Ironically, Spiffits, DowBrands, and Henderson Advertising have all disappeared from the American marketing landscape; however, the legacy of pre-moistened cleaning towels has flourished, and they have become a staple in many of America's homes today. In 1998, Dow divested the primary components of its consumer products division to S. C. Johnson & Son, Inc. for US$1.125 billion. The transaction included the intellectual properties associated with the division, including the Spiffits trademarks. Sources Advertising Age, February 19, 1990; M. Klein, former Henderson Advertising executive; The Dow Chemical Company, Form 10-K filed 24 March 1999. Cleaning products
Spiffits
Chemistry
304
73,558,859
https://en.wikipedia.org/wiki/Branch%20number
In cryptography, the branch number is a numerical value that characterizes the amount of diffusion introduced by a vectorial Boolean function F that maps an input vector x to an output vector F(x). For the (usual) case of a linear F the value of the differential branch number is produced by: applying nonzero values of x (i.e., values that have at least one non-zero component of the vector) to the input of F; calculating for each input value x the Hamming weight (number of nonzero components) of x and of F(x), and adding the weights W(x) and W(F(x)) together; selecting the smallest combined weight across all nonzero input values: $BN = \min_{x \neq 0}\left(W(x) + W(F(x))\right)$. If both x and F(x) have n components, the result is obviously limited on the high side by the value n + 1 (this "perfect" result is achieved when any single nonzero component in x makes all n components of F(x) non-zero). A high branch number suggests higher resistance to differential cryptanalysis: small variations of the input will produce large changes of the output, and in order to obtain small variations of the output, large changes of the input value will be required. The term was introduced by Daemen and Rijmen in the early 2000s and quickly became a typical tool to assess the diffusion properties of transformations. Mathematics The branch number concept is not limited to linear transformations; Daemen and Rijmen provided two general metrics: the differential branch number, where the minimum is obtained over inputs of F that are constructed by independently sweeping all the values of two nonzero and unequal vectors a and b ($\oplus$ is a component-by-component exclusive-or): $BN_d(F) = \min_{a \neq b}\left(W(a \oplus b) + W(F(a) \oplus F(b))\right)$; for the linear branch number, the independent candidates a and b are independently swept; they should be nonzero and correlated with respect to F (the coefficient $LAT_F(a, b)$ of the linear approximation table of F should be nonzero): $BN_l(F) = \min_{LAT_F(a,b) \neq 0}\left(W(a) + W(b)\right)$. References Sources Cryptography
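For a linear map over GF(2)^n the differential branch number defined above reduces to the minimum of W(x) + W(F(x)) over all nonzero inputs, which is small enough to compute by exhaustive search. The sketch below does exactly that for an arbitrary 4x4 binary example matrix; the matrix is an illustration, not taken from any standard cipher.

```python
from itertools import product

def hamming_weight(bits):
    return sum(bits)

def apply_gf2(matrix, x):
    # y_i = XOR over j of matrix[i][j] * x_j, i.e. matrix-vector product in GF(2)
    return [sum(m_ij & x_j for m_ij, x_j in zip(row, x)) % 2 for row in matrix]

def differential_branch_number(matrix):
    n = len(matrix[0])
    best = None
    for x in product([0, 1], repeat=n):
        if not any(x):
            continue  # the definition excludes the all-zero input
        w = hamming_weight(x) + hamming_weight(apply_gf2(matrix, list(x)))
        best = w if best is None else min(best, w)
    return best

M = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
print(differential_branch_number(M))  # 4 for this example matrix
```

The exhaustive sweep costs 2^n evaluations, so this direct approach is only practical for the small n typical of cipher building blocks.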
Branch number
Mathematics,Engineering
364
5,135,754
https://en.wikipedia.org/wiki/Fej%C3%A9r%27s%20theorem
In mathematics, Fejér's theorem, named after Hungarian mathematician Lipót Fejér, states the following: if $f:\mathbb{R}\to\mathbb{C}$ is a continuous function with period $2\pi$, then the sequence $(\sigma_n)$ of Cesàro means of the sequence $(s_n)$ of partial sums of the Fourier series of $f$ converges uniformly to $f$ on $[-\pi,\pi]$. Explanation of Fejér's Theorem Explicitly, we can write the Fourier series of f as $f(x)\sim\sum_{n=-\infty}^{\infty}c_n e^{inx}$, where the nth partial sum of the Fourier series of f may be written as $s_n(f,x)=\sum_{k=-n}^{n}c_k e^{ikx}$, where the Fourier coefficients are $c_k=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)e^{-ikt}\,dt$. Then, we can define $\sigma_n(f,x)=\frac{1}{n+1}\sum_{k=0}^{n}s_k(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)F_n(t)\,dt$, with Fn being the nth order Fejér kernel. Then, Fejér's theorem asserts that $\sigma_n(f)\to f$ with uniform convergence. With the convergence written out explicitly, the above statement becomes $\sup_{x}|\sigma_n(f,x)-f(x)|\to 0$ as $n\to\infty$. Proof of Fejér's Theorem We first prove the following lemma: Lemma 1. The nth partial sum has the integral representation $s_n(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)D_n(t)\,dt$. Proof: Recall the definition of $D_n$, the Dirichlet kernel: $D_n(t)=\sum_{k=-n}^{n}e^{ikt}$. We substitute the integral form of the Fourier coefficients into the formula for $s_n$ above: $s_n(f,x)=\sum_{k=-n}^{n}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)e^{-ikt}\,dt\right)e^{ikx}=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)D_n(x-t)\,dt$. Using the change of variables $t\mapsto x-t$ we get $s_n(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)D_n(t)\,dt$. This completes the proof of Lemma 1. We next prove the following lemma: Lemma 2. The nth Cesàro mean has the integral representation $\sigma_n(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)F_n(t)\,dt$. Proof: Recall the definition of the Fejér kernel, $F_n(t)=\frac{1}{n+1}\sum_{k=0}^{n}D_k(t)$. As in the case of Lemma 1, we substitute the integral form of the Fourier coefficients into the formula for $\sigma_n$: $\sigma_n(f,x)=\frac{1}{n+1}\sum_{k=0}^{n}s_k(f,x)=\frac{1}{n+1}\sum_{k=0}^{n}\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)D_k(t)\,dt=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)F_n(t)\,dt$. This completes the proof of Lemma 2. We next prove the 3rd lemma: Lemma 3. The Fejér kernel satisfies: a) $\frac{1}{2\pi}\int_{-\pi}^{\pi}F_n(t)\,dt=1$; b) $F_n(t)\ge 0$; c) for all fixed $\delta>0$, $\int_{\delta\le|t|\le\pi}F_n(t)\,dt\to 0$ as $n\to\infty$. Proof: a) Given that $F_n$ is the mean of $D_0,\dots,D_n$, the integral of each of which is 1, by linearity, the integral of $F_n$ is also equal to 1. b) As $D_k(t)$ is a geometric sum, we get a simple formula for $D_k(t)$ and then for $F_n(t)$, using De Moivre's formula: $F_n(t)=\frac{1}{n+1}\sum_{k=0}^{n}\frac{\sin\left(\frac{(2k+1)t}{2}\right)}{\sin\left(\frac{t}{2}\right)}=\frac{1}{n+1}\,\frac{\sin^2\left(\frac{(n+1)t}{2}\right)}{\sin^2\left(\frac{t}{2}\right)}\ge 0$. c) For all fixed $\delta>0$ and $\delta\le|t|\le\pi$, $F_n(t)\le\frac{1}{n+1}\,\frac{1}{\sin^2\left(\frac{\delta}{2}\right)}$. This shows that the integral over $\delta\le|t|\le\pi$ converges to zero, as $n$ goes to infinity. This completes the proof of Lemma 3. We are now ready to prove Fejér's theorem. First, let us recall the statement we are trying to prove: $\sup_{x}|\sigma_n(f,x)-f(x)|\to 0$ as $n\to\infty$. We want to find an expression for $|\sigma_n(f,x)-f(x)|$. We begin by invoking Lemma 2: $\sigma_n(f,x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-t)F_n(t)\,dt$. By Lemma 3a we know that $\sigma_n(f,x)-f(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\left(f(x-t)-f(x)\right)F_n(t)\,dt$. Applying the triangle inequality yields $|\sigma_n(f,x)-f(x)|\le\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|\,|F_n(t)|\,dt$, and by Lemma 3b, we get $|\sigma_n(f,x)-f(x)|\le\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x-t)-f(x)|\,F_n(t)\,dt$. We now split the integral into two parts, integrating over the two regions $|t|\le\delta$ and $\delta\le|t|\le\pi$. The motivation for doing so is that we want to prove that $\lim_{n\to\infty}|\sigma_n(f,x)-f(x)|=0$. We can do this by proving that each integral above, integral 1 and integral 2, goes to zero. This is precisely what we'll do in the next step. We first note that the function f is continuous on [-π,π]. We invoke the theorem that every periodic function on [-π,π] that is continuous is also bounded and uniformly continuous. This means that for every $\epsilon>0$ there exists $\delta>0$ such that $|f(x-t)-f(x)|\le\epsilon$ whenever $|t|\le\delta$. Hence we can rewrite integral 1 as follows: $\frac{1}{2\pi}\int_{|t|\le\delta}|f(x-t)-f(x)|\,F_n(t)\,dt\le\frac{\epsilon}{2\pi}\int_{|t|\le\delta}F_n(t)\,dt$. Because $F_n\ge 0$ and, by Lemma 3a, $\frac{1}{2\pi}\int_{|t|\le\delta}F_n(t)\,dt\le\frac{1}{2\pi}\int_{-\pi}^{\pi}F_n(t)\,dt=1$, we then get for all n that integral 1 is at most $\epsilon$. This gives the desired bound for integral 1 which we can exploit in the final step. For integral 2, we note that since f is bounded, we can write this bound as $M=\sup_{-\pi\le t\le\pi}|f(t)|$, so that integral 2 is at most $\frac{2M}{2\pi}\int_{\delta\le|t|\le\pi}F_n(t)\,dt$. We are now ready to prove that $\sup_x|\sigma_n(f,x)-f(x)|\to 0$. We begin by writing $|\sigma_n(f,x)-f(x)|\le\epsilon+\frac{2M}{2\pi}\int_{\delta\le|t|\le\pi}F_n(t)\,dt$. Thus, by Lemma 3c we know that the integral goes to 0 as n goes to infinity, so $\limsup_{n\to\infty}\sup_x|\sigma_n(f,x)-f(x)|\le\epsilon$, and because epsilon is arbitrary, the limit equals 0. Hence $\sigma_n(f)\to f$ uniformly, which completes the proof. Modifications and Generalisations of Fejér's Theorem In fact, Fejér's theorem can be modified to hold for pointwise convergence. However, the theorem does not hold in general when we replace the sequence $(\sigma_n)$ with $(s_n)$. This is because there exist continuous functions whose Fourier series fails to converge at some point. However, the set of points at which a function in $L^2$ diverges has to be of measure zero. This fact, called Lusin's conjecture or Carleson's theorem, was proven in 1966 by L. Carleson. We can however prove a corollary which goes as follows: if $s_n(f,x)$ converges as $n\to\infty$, then it necessarily converges to the same limit as $\sigma_n(f,x)$, namely $f(x)$. A more general form of the theorem applies to functions which are not necessarily continuous. Suppose that f is in L1(-π,π). If the left and right limits f(x0±0) of f(x) exist at x0, or if both limits are infinite of the same sign, then $\sigma_n(x_0)\to\tfrac{1}{2}\left(f(x_0+0)+f(x_0-0)\right)$. Existence or divergence to infinity of the Cesàro mean is also implied.
By a theorem of Marcel Riesz, Fejér's theorem holds precisely as stated if the (C, 1) mean $\sigma_n$ is replaced with the (C, α) mean of the Fourier series, for any α > 0. References Fourier series Theorems in approximation theory
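As a quick numerical illustration of the theorem (not part of the article's proof), the following sketch computes the Cesàro means σ_n, via the equivalent weighting of the Fourier coefficients by 1 - |k|/(n+1), for the continuous 2π-periodic function f(x) = |x|, and shows the sup-norm error shrinking as n grows. The discretization, the test function, and the Riemann-sum quadrature are choices of this sketch.

```python
import numpy as np

# Continuous 2*pi-periodic test function f(x) = |x|, sampled on [-pi, pi)
x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
f = np.abs(x)
dx = x[1] - x[0]

def fourier_coefficient(k):
    # c_k = (1 / (2*pi)) * integral over [-pi, pi] of f(t) e^{-ikt} dt (Riemann sum)
    return np.sum(f * np.exp(-1j * k * x)) * dx / (2 * np.pi)

def fejer_mean(n):
    # sigma_n equals the partial-sum average, i.e. the coefficient sum
    # weighted by the triangular factors (1 - |k| / (n + 1))
    sigma = np.zeros_like(x, dtype=complex)
    for k in range(-n, n + 1):
        sigma += (1 - abs(k) / (n + 1)) * fourier_coefficient(k) * np.exp(1j * k * x)
    return sigma.real

for n in (4, 16, 64):
    print(n, float(np.max(np.abs(fejer_mean(n) - f))))  # sup-norm error decreases with n
```

Running this shows the worst-case error over the whole period decreasing monotonically, which is exactly the uniform convergence the theorem guarantees for continuous periodic functions.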
Fejér's theorem
Mathematics
845
34,826,296
https://en.wikipedia.org/wiki/Doxing
Doxing, also spelled doxxing, is the act of publicly providing personally identifiable information about an individual or organization, usually via the Internet and without their consent. Historically, the term has been used to refer to both the aggregation of this information from public databases and social media websites (like Facebook), and the publication of previously private information obtained through criminal or otherwise fraudulent means (such as hacking and social engineering). The aggregation and provision of previously published material is generally legal, though it may be subject to laws concerning stalking and intimidation. Doxing may be carried out for reasons such as online shaming, extortion, and vigilante aid to law enforcement. It also may be associated with hacktivism. Etymology "Doxing" is a neologism. It originates from a spelling alteration of the abbreviation "docs", for "documents", and refers to "compiling and releasing a dossier of personal information on someone". Essentially, doxing is revealing and publicizing the records of an individual, which were previously private or difficult to obtain. The term dox derives from the slang "dropping dox", which, according to Wired contributor Mat Honan, was "an old-school revenge tactic that emerged from hacker culture in 1990s". Hackers operating outside the law in that era used the breach of an opponent's anonymity as a means to expose them to harassment or legal repercussions. Consequently, doxing often carries a negative connotation because it can be a means of revenge via the violation of privacy. History The practice of publishing personal information about individuals as a form of vigilantism predates the Internet, via physical media such as newspapers and pamphlets. For example, in response to the Stamp Act 1765 in the Thirteen Colonies, radical groups such as the Sons of Liberty harassed tax collectors and those who did not comply with boycotts on British goods by publishing their names in pamphlets and newspaper articles. Outside of hacker communities, the first prominent examples of doxing took place on internet discussion forums on Usenet in the late 1990s, including users circulating lists of suspected neo-Nazis and, later, racists. Also in the late 1990s, a website called the Nuremberg Files was launched, featuring the home addresses of abortion providers and language that implied website visitors should stalk and kill the people listed. In 2012, when then-Gawker reporter Adrian Chen revealed the identity of Reddit troll Violentacrez as Michael Brutsch, Reddit users accused Chen of doxing Brutsch and declared "war" on Gawker. In the mid-2010s, the events of the Gamergate harassment campaign brought the term into wider public use. Participants in Gamergate became known for releasing sensitive information about their targets to the public, sometimes with the intent of causing the targets in question physical harm. Caroline Sinders, a research fellow at the Center for Democracy and Technology, said that "Gamergate, for a lot of people, for mainstream culture, was the introduction to what doxxing is". According to The Atlantic, from 2014 to 2020, "the doxxing conversation was dominated by debate around whether unmasking a pseudonymous person with a sizable following was an unnecessary and dangerous invasion of their privacy." In 2014, when Newsweek attempted to search for the pseudonymous developer of Bitcoin, the magazine was accused of doxing by cryptocurrency enthusiasts.
In 2016, when an Italian journalist attempted to search for the identity of the pseudonymous Italian novelist Elena Ferrante, the journalist was accused of gendered harassment and Vox referred to the search as "the doxxing of Elena Ferrante." In 2020, when The New York Times indicated that it was planning on publishing the real name of the California psychiatrist running the Slate Star Codex blog, fans of the blog accused the Times of doxing. The person behind the blog accused the Times of threatening his safety and claimed that the paper's plan started a "major scandal" that resulted in the Times losing hundreds or thousands of subscriptions. In 2022, BuzzFeed News reporter Katie Notopoulos used public business records to identify the previously pseudonymous founders of the Bored Ape Yacht Club. Greg Solano, one of the founders of the club, claimed that he "got doxxed against [his] will". In April 2022, The Washington Post reporter Taylor Lorenz revealed the identity of the person behind the Twitter account Libs of TikTok as Chaya Raichik, who works in real estate. This resulted in Raichik and right-wingers accusing Lorenz of doxing. Pro-Israel NGOs including the Israel on Campus Coalition and Canary Mission have been accused of doxing Palestinian activists by releasing public dossiers through flyers and their websites. The Israel-Hamas war saw a surge in doxing activities in the United States. The right-wing advocacy group Accuracy in Media sent doxing trucks to Yale University and Columbia University, displaying the names and faces of students deemed anti-Israel under a banner labeling them "leading antisemites" on campus. Similarly, Canary Mission published the identities and images of Harvard University students involved in the circulation of an open letter, published on October 7th, that held "the Israeli regime entirely responsible for all unfolding violence". Doxware Doxware is a cryptovirology attack invented by Adam Young and further developed with Moti Yung that carries out doxing extortion via malware. It was first presented at West Point in 2003. The attack is rooted in game theory and was originally dubbed "non-zero-sum games and survivable malware". The attack is summarized in the book Malicious Cryptography as follows: The attack differs from the extortion attack in the following way. In the extortion attack, the victim is denied access to its own valuable information and has to pay to get it back, where in the attack that is presented here the victim retains access to the information but its disclosure is at the discretion of the computer virus. Doxware is the converse of ransomware. In a ransomware attack (originally called cryptoviral extortion), the malware encrypts the victim's data and demands payment to provide the needed decryption key. In the doxware cryptovirology attack, the attacker or malware steals the victim's data and threatens to publish it unless a fee is paid. Common techniques Once people have been exposed through doxing, they may be targeted for harassment through methods such as actual harassment in person, fake signups for mail subscriptions, food deliveries, bombarding the address with letters, or through "swatting"—the intentional dispatching of armed police teams (S.W.A.T.) to a person's address via falsely reported tips or through fake emergency services phone calls.
The act of reporting a false tip to police—and the subsequent summoning of an emergency response team (ERT)—is an illegal, punishable offense in most jurisdictions, due to ERTs being compromised and potentially unavailable for real emergencies. It is, at the very least, an infraction in most US states (for first-time offenders); if multiple attempts are made, the charge increases to a misdemeanor (especially when the intention is harassment-based). Further repercussions include fines ranging from as low as US$50 up to US$2,000, six months spent in county jail, or both the fine and imprisonment. A hacker may obtain an individual's dox without making the information public. A hacker may look for this information to extort or coerce a known or unknown target. A hacker may also harvest a victim's information to break into their Internet accounts or take over their social media accounts. Doxing has also occurred in dating apps: in a survey conducted in 2021, 16% of respondents reported having suffered doxing through such apps. In a 2018 qualitative study about intimate partner violence, 28 out of 89 participants (both professionals and survivors) reported the exposure of the victim's private information to third parties through digital technologies as a form of humiliation, shaming or harm frequently practiced by abusers, which may include the disclosure of intimate images and impersonation of the victim. Victims may also be shown their details as proof that they have been doxed, as a form of intimidation. The perpetrator may use this fear to gain power over victims in order to extort or coerce. Doxing is therefore a standard tactic of online harassment and has been used by people associated with the Gamergate and vaccine controversies. There are different motivations for doxing. Some dox to reveal harmful behavior and hold the offender accountable; others use it to embarrass, scare, threaten, or punish someone. It is also often used for cyberstalking, which can make targets fear for their safety. Researchers have pointed out that some instances of doxing can be justified, such as when it reveals harmful behavior, but only if the act of doxing also aligns with the public interest. Anti-doxing services Parallel to the rise of doxing has been the evolution of cybersecurity, internet privacy, the Online Privacy Alliance, and companies that provide anti-doxing services. Most recently, high-profile groups like the University of California, Berkeley have published online guidance for protecting their community members from doxing. Wired published an article on dealing with doxing, in which Eva Galperin, from the Electronic Frontier Foundation, advised people to "Google yourself, lock yourself down, make it harder to access information about you." Legislation Australia In 2024, the Australian government announced they would introduce new legislation to criminalise doxing due to an incident in which the personal details of over 600 people from a WhatsApp group of Jewish Australians were leaked. Some of the people whose details were leaked received threats to harm their reputation as well as death threats. The proposed legislation, which includes a law that makes doxing punishable by jail time, has received bipartisan support, and support from Prime Minister Anthony Albanese. Austria In 2006 Austria passed its anti-stalking law, and in 2016 cyber-mobbing became a criminal offense. While as of the end of 2024 doxing is not a specific offense, the laws mentioned are used in cases of online violence.
Since Austria is an EU member state, EU data protection law (the GDPR, known in German as the DSGVO) applies. Mainland China Since March 1, 2020, the People's Republic of China's "Regulations on the Ecological Governance of Online Information Content" have been in effect, clarifying that users and producers of online information content services and platforms must not engage in online violence, doxing, deep forgery, data fraud, account manipulation and other illegal activities. Hong Kong As of 2021, it is a criminal offense in Hong Kong to dox, where doxing is defined as releasing private or non-public information on a person for the purposes of "threatening, intimidation, harassment or to cause psychological harm". Persons convicted under this statute are liable to imprisonment for up to 5 years and a fine of HK$1,000,000 (US$128,324.40). Germany In Germany, doxing was added to the criminal code in September 2021 as gefährdendes Verbreiten personenbezogener Daten, the endangering dissemination of personal data (Section 126a of the Criminal Code). Since then, the publication of freely accessible data is punishable by a prison sentence of up to two years or a fine, and the publication of data that is not freely accessible is punishable by a prison sentence of up to three years or a fine. The dissemination of the data and its content must be suitable and, under the circumstances, intended to expose the person concerned or persons close to them to a crime directed against them or another unlawful act against sexual self-determination, physical integrity, personal freedom or against an object of significant value. By referring to Section 86 of the Criminal Code, which criminalizes the dissemination of propaganda material of unconstitutional organizations, the endangering dissemination of personal data is not punishable if it is socially appropriate and "the act serves civic education, the defense against unconstitutional efforts, art or science, research or teaching, the reporting on current events or history or similar purposes" (Section 86 Paragraph 4 of the Criminal Code). Netherlands In 2021, due to increasing doxing incidents targeting Dutch activists, politicians, journalists and others, a new law against doxing was proposed by then Minister of Justice and Security Ferdinand Grapperhaus. The law states it is a felony to share personal data with the intent of intimidation, harassment or work-hindering, and carries a maximum penalty of a two-year prison sentence or a fine of €25,750 (US$28,204). The penalty shall be increased by a third when the offense targets certain public figures. The proposed law passed both houses of parliament and went into effect on 1 January 2024. Early in 2025 the War in Court project digitally released a list of names of nearly half a million suspected wartime Nazi collaborators. Russia Under Article 137, "Invasion of Personal Privacy", public sharing of personal information, using mass media, the Internet, or even public events, is considered a crime and shall be punishable by a fine of up to eighteen months' wages, or by compulsory labor for a term of up to three hundred sixty hours, or by corrective labor for a term of up to one year, or by forced labor for a term of up to two years with deprivation of the right to hold certain positions or engage in certain activities for a term of up to three years or without it, or arrest for a term of up to four months, or imprisonment for a term of up to two years with deprivation of the right to hold certain positions or engage in certain activities for a term of up to three years. Copying the information and obtaining it illegally are separate offences as well.
South Korea South Korea is one of the few countries with a criminal statute that specifically addresses doxing. Article 49 of the "Act on promotion of information and communications network utilization, and information protection" prohibits unlawful collection and dissemination of private information such as full name, birth date, address, likeness, and any other information that is deemed sufficient to identify specific person(s) when viewed in summation, regardless of intent. In practice, however, due to the ambiguous nature of "unlawful collection" of private information in the statute, legal actions are often based upon article 44 from the same act, which prohibits insulting an individual with derogatory or profane language, and defamation of an individual through the dissemination of either misinformation or privileged factual information that may potentially damage an individual's reputation or honor (which often occurs in a doxing incident). This particular clause enforces harsher maximum sentences than the "traditional" defamation statute existing in the Korean criminal code. It was originally enacted partially in response to the rise in celebrity suicides due to cyberbullying. Spain The Spanish Criminal Code regulates penalties for the discovery and revelation of secrets in articles 197 to 201. It establishes, in its article 197 § 1, that "whoever, in order to discover the secrets or violate the privacy of another, without their consent, seizes their papers, letters, e-mail messages or any other documents or personal effects, intercepts their telecommunications or uses technical devices for listening, transmission, recording or reproduction of sound or image, or any other communication signal, shall be punished with prison sentences of one to four years and a fine of twelve to twenty-four months". Per article 197 § 2, the same penalty punishes those who "seize, use or modify, to the detriment of a third party, reserved personal or family data of another that is registered in computer, electronic or telematic files or media, or in any other type of file or public or private record". Those who "disseminate, disclose or transfer" the aforementioned data to third parties face a penalty of two to five years in prison (one to three years of prison and fines of twelve to twenty-four months, if not directly involved in their discovery but acting "with knowledge of its illicit origin"). These offenses are particularly severe if committed by the person responsible for the respective files, media, records or archives or through unauthorized use of personal data, if they reveal the ideology, religion, beliefs, health, racial origin or sexual life of the victim, if the victim is underage or disabled, and if they are committed for economic profit. As established by the 2015 reform of the Criminal Code, to "disseminate, disclose or transfer to third parties images or audiovisual recordings of a person obtained with their consent in a home or in any other place out of sight of third parties, when the disclosure seriously undermines the personal privacy of that person", without the authorization of the affected person, is also punished per article 197 § 7 with three months to a year in prison and fines of six to twelve months. The offense is particularly severe if the victim is linked to the offender by marriage or an "analogous affective relationship", is underage, or is disabled. United States In the United States, there are few legal remedies for the victims of doxing.
Two federal laws exist that could potentially address the problem of doxing: the Interstate Communications Statute and the Interstate Stalking Statute. However, as one scholar has argued, "[t]hese statutes ... are woefully inadequate to prevent doxing because their terms are underinclusive and they are rarely enforced". The Interstate Communications Statute, for example, "only criminalizes explicit threats to kidnap or injure a person". But in many instances of doxing, a doxer may never convey an explicit threat to kidnap or injure, but the victim could still have good reason to be terrified. And the Interstate Stalking Statute "is rarely enforced and it serves only as a hollow protection from online harassment". According to at least one estimate, over three million people are stalked over the internet each year, yet only about three are charged under the Interstate Stalking Statute. Accordingly, "[t]his lack of federal enforcement means that the States must step in if doxing is to be reduced." In late 2023 and early 2024, during a rash of swatting incidents targeting American politicians, doxing became widely used as a way of encouraging attacks, as the United States possesses weak laws surrounding data privacy, with its citizens' personal information often easily accessible online due to various data brokers. See also Data re-identification Doomscrolling Doxbin Escrache Identity theft Opposition research Outing Skiptrace References Sources External links Cyberbullying Cybercrime Cyberstalking Data security Hacking (computer security) Internet privacy Internet terminology Internet vigilantism Identity documents Privacy controversies
Doxing
Technology,Engineering
3,826
23,289,685
https://en.wikipedia.org/wiki/Moo%20box
The moo box or moo can is a toy or a souvenir, also used as a hearing test. When turned upside down, it produces a noise that resembles the mooing of a cow. The toy can be configured to create other animal sounds such as the meow of a cat, the chirp of a bird, or the bleat of a sheep. Construction The moo box consists of a block and a bellows. The bellows is sealed to the bottom of the box and to the block. The block is heavy and perforated, and is used to actuate the bellows, producing the sound. When the box is inverted, the block falls away from the bottom, filling the bellows with air. When the box is turned right side up, the air is expelled through a vibrating blade (which makes it a free reed instrument), producing the sound. After passing the blade, the air passes through a duct of variable length, which determines the pitch of the sound. Moatti test The toy can be used to perform the Moatti test, conceived by the doctor Lucien Moatti, which tests infants' hearing at different frequencies. It uses four boxes at different frequencies, all calibrated to generate a sound pressure level (loudness) of sixty decibels at two metres. The test can be used to screen the hearing of children aged from six to 24 months. The tester tips the boxes over out of sight of the child. If the child hears the sound, they will turn their head towards it. Notable appearances in pop culture A moo box was used in the Beastie Boys' track "B-boys Makin' With The Freak Freak", from their 1994 album Ill Communication. Michael Scott confuses a radon detector for a moo box in the cold open of The Office episode "The Chump". In the French film Les Couloirs du temps : Les Visiteurs II (1998), a man from the modern day is accidentally transported to the Middle Ages, bringing with him souvenirs from the future. While he is being interrogated by villagers, one of them grabs a moo box and believes the toy to be a work of witchcraft, whereupon they tie him to a stake to be burned. The Kube brothers make moo boxes in the post-apocalyptic French film Delicatessen (1991), directed by Jean-Pierre Jeunet and Marc Caro. In the THX trailer Tex 2: Moo Can (better known as just Tex 2), Tex, a robot mascot of the American audio company THX, uses a moo can to perform a Deep Note mooed by cows. This trailer premiered with the original theatrical release of Alien Resurrection in November 1997. It was seen on Pixar and 20th Century Fox DVDs (1997-2005). In Invader Zim, the secretary for the school nurse is seen using the toy in the controversial episode "Dark Harvest". In the 2005 film Constantine, the main character, John Constantine, trades a moo box designed to imitate the bleat of a sheep with his friend and ally Beeman in exchange for holy objects and weapons. Later, after Beeman's death, many moo boxes can be seen in Beeman's office in the bowling alley, indicating he was a collector. A moo can was also seen in the Despicable Me trailer, where two Minions play with the toy. See also Groan Tube References Mechanical toys Toy instruments and noisemakers
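Since the Moatti boxes described above are calibrated to sixty decibels at two metres, a small worked example can show what that calibration implies at other distances. The free-field, point-source (inverse-square) model below is an idealization of this sketch, not a detail taken from the test protocol; real rooms add reflections that raise the level at larger distances.

```python
import math

def spl_at(distance_m: float, ref_level_db: float = 60.0, ref_distance_m: float = 2.0) -> float:
    """Sound pressure level at a given distance, assuming a free-field point source:
    the level falls by 6 dB for every doubling of distance."""
    return ref_level_db - 20.0 * math.log10(distance_m / ref_distance_m)

for r in (1.0, 2.0, 4.0):
    print(f"{r} m -> {spl_at(r):.0f} dB SPL")  # 66, 60, 54 dB
```

This also illustrates why the calibration distance matters for a screening test: the same box heard at one metre is about 6 dB louder than the calibrated level.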
Moo box
Physics,Technology
712
1,884,226
https://en.wikipedia.org/wiki/Spectrofluorometer
A spectrofluorometer is an instrument which takes advantage of fluorescent properties of some compounds in order to provide information regarding their concentration and chemical environment in a sample. A certain excitation wavelength is selected, and the emission is observed either at a single wavelength, or a scan is performed to record the intensity versus wavelength, also called an emission spectrum. The instrument is used in fluorescence spectroscopy. Operation Generally, spectrofluorometers use high-intensity light sources to bombard a sample with as many photons as possible. This allows for the maximum number of molecules to be in an excited state at any one point in time. The light is either passed through a filter, selecting a fixed wavelength, or a monochromator, which allows a wavelength of interest to be selected for use as the exciting light. The emission is typically collected perpendicular to the excitation light. The emission is also either passed through a filter or a monochromator before being detected by a photomultiplier tube, photodiode, or charge-coupled device detector. The signal can be processed as either digital or analog output. Systems vary greatly and a number of considerations affect the choice. The first is the signal-to-noise ratio. There are many ways to quantify the signal-to-noise ratio of a given system, but the accepted standard is to use the Raman signal of water. Sensitivity or detection limit is another specification to be considered, that is, how little light can be measured. The standard here is fluorescein in NaOH; typical values for a high-end instrument are in the femtomolar range. Auxiliary components These systems come with many options, including: Polarizers Peltier temperature controllers Cryostats Cold finger Dewars Pulsed lasers for lifetime measurements LEDs for lifetimes Filter holders Adjustable optics (very important) Solid sample holders Slide holders Integrating spheres Near-infrared detectors Bilateral slits Manual slits Computer-controlled slits Fast-switching monochromators Filter wheels References Laboratory equipment Spectrometers
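To make the water-Raman signal-to-noise figure mentioned above concrete, here is a minimal sketch of one common convention for computing it: the Raman peak signal minus the mean background, divided by the RMS noise of a background region of the spectrum. The counts below and the exact convention are illustrative assumptions; instrument vendors differ in how they define the noise term.

```python
import statistics

def water_raman_snr(peak_counts: float, baseline_counts) -> float:
    """Signal-to-noise ratio of the water Raman peak: (peak - background) / RMS noise."""
    background = statistics.mean(baseline_counts)
    noise_rms = statistics.pstdev(baseline_counts)  # RMS deviation of the baseline region
    return (peak_counts - background) / noise_rms

baseline = [101.0, 98.0, 100.0, 102.0, 99.0]  # counts recorded away from the Raman peak
print(round(water_raman_snr(6100.0, baseline)))  # roughly 4243 with these made-up numbers
```

Because the Raman scattering of pure water is weak and highly reproducible, it serves as a convenient fixed "sample" for comparing the noise floors of different instruments.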
Spectrofluorometer
Physics,Chemistry
408
143,533
https://en.wikipedia.org/wiki/Green%20fluorescent%20protein
The green fluorescent protein (GFP) is a protein that exhibits green fluorescence when exposed to light in the blue to ultraviolet range. The label GFP traditionally refers to the protein first isolated from the jellyfish Aequorea victoria and is sometimes called avGFP. However, GFPs have been found in other organisms including corals, sea anemones, zoanthids, copepods and lancelets. The GFP from A. victoria has a major excitation peak at a wavelength of 395 nm and a minor one at 475 nm. Its emission peak is at 509 nm, which is in the lower green portion of the visible spectrum. The fluorescence quantum yield (QY) of GFP is 0.79. The GFP from the sea pansy (Renilla reniformis) has a single major excitation peak at 498 nm. GFP makes for an excellent tool in many forms of biology due to its ability to form an internal chromophore without requiring any accessory cofactors, gene products, or enzymes or substrates other than molecular oxygen. In cell and molecular biology, the GFP gene is frequently used as a reporter of expression. It has been used in modified forms to make biosensors, and many animals have been created that express GFP, which demonstrates a proof of concept that a gene can be expressed throughout a given organism, in selected organs, or in cells of interest. GFP can be introduced into animals or other species through transgenic techniques, and maintained in their genome and that of their offspring. GFP has been expressed in many species, including bacteria, yeasts, fungi, fish and mammals, including in human cells. Scientists Roger Y. Tsien, Osamu Shimomura, and Martin Chalfie were awarded the 2008 Nobel Prize in Chemistry on 10 October 2008 for their discovery and development of the green fluorescent protein. Most commercially available genes for GFP and similar fluorescent proteins are around 730 base-pairs long. The natural protein has 238 amino acids. Its molecular mass is 27 kDa. Therefore, fusing the GFP gene to the gene of a protein of interest can significantly increase the protein's size and molecular mass, and can impair the protein's natural function or change its location or trajectory of transport within the cell. Background Wild-type GFP (wtGFP) In the 1960s and 1970s, GFP, along with the separate luminescent protein aequorin (an enzyme that catalyzes the breakdown of luciferin, releasing light), was first purified from the jellyfish Aequorea victoria and its properties studied by Osamu Shimomura. In A. victoria, GFP fluorescence occurs when aequorin interacts with Ca2+ ions, inducing a blue glow. Some of this luminescent energy is transferred to the GFP, shifting the overall color towards green. However, its utility as a tool for molecular biologists did not begin to be realized until 1992 when Douglas Prasher reported the cloning and nucleotide sequence of wtGFP in Gene. The funding for this project had run out, so Prasher sent cDNA samples to several labs. The lab of Martin Chalfie expressed the coding sequence of wtGFP, with the first few amino acids deleted, in heterologous cells of E. coli and C. elegans, publishing the results in Science in 1994. Frederick Tsuji's lab independently reported the expression of the recombinant protein one month later. Remarkably, the GFP molecule folded and was fluorescent at room temperature, without the need for exogenous cofactors specific to the jellyfish.
Although this near-wtGFP was fluorescent, it had several drawbacks, including dual peaked excitation spectra, pH sensitivity, chloride sensitivity, poor fluorescence quantum yield, poor photostability and poor folding at 37 °C. The first reported crystal structure of a GFP was that of the S65T mutant by the Remington group in Science in 1996. One month later, the Phillips group independently reported the wild-type GFP structure in Nature Biotechnology. These crystal structures provided vital background on chromophore formation and neighboring residue interactions. Researchers have modified these residues by directed and random mutagenesis to produce the wide variety of GFP derivatives in use today. Further research into GFP has shown that it is resistant to detergents, proteases, guanidinium chloride (GdmCl) treatments, and drastic temperature changes. GFP derivatives Due to the potential for widespread usage and the evolving needs of researchers, many different mutants of GFP have been engineered. The first major improvement was a single point mutation (S65T) reported in 1995 in Nature by Roger Tsien. This mutation dramatically improved the spectral characteristics of GFP, resulting in increased fluorescence, photostability, and a shift of the major excitation peak to 488 nm, with the peak emission kept at 509 nm. This matched the spectral characteristics of commonly available FITC filter sets, increasing the practicality of use by the general researcher. A 37 °C folding efficiency (F64L) point mutant to this scaffold, yielding enhanced GFP (EGFP), was discovered in 1995 by the laboratories of Thastrup and Falkow. EGFP allowed the practical use of GFPs in mammalian cells. EGFP has an extinction coefficient (denoted ε) of 55,000 M−1cm−1. The fluorescence quantum yield (QY) of EGFP is 0.60. The relative brightness, expressed as ε•QY, is 33,000 M−1cm−1. Superfolder GFP (sfGFP), a series of mutations that allow GFP to rapidly fold and mature even when fused to poorly folding peptides, was reported in 2006. Many other mutations have been made, including color mutants; in particular, blue fluorescent protein (EBFP, EBFP2, Azurite, mKalama1), cyan fluorescent protein (ECFP, Cerulean, CyPet, mTurquoise2), and yellow fluorescent protein derivatives (YFP, Citrine, Venus, YPet). BFP derivatives (except mKalama1) contain the Y66H substitution. They exhibit a broad absorption band in the ultraviolet centered close to 380 nanometers and an emission maximum at 448 nanometers. A green fluorescent protein mutant (BFPms1) that preferentially binds Zn(II) and Cu(II) has been developed. BFPms1 has several important mutations, including the BFP chromophore substitution (Y66H), Y145F for higher quantum yield, H148G for creating a hole into the beta-barrel, and several other mutations that increase solubility. Zn(II) binding increases fluorescence intensity, while Cu(II) binding quenches fluorescence and shifts the absorbance maximum from 379 to 444 nm. Therefore, it can be used as a Zn biosensor. The critical mutation in cyan derivatives is the Y66W substitution, which causes the chromophore to form with an indole rather than phenol component. Several additional compensatory mutations in the surrounding barrel are required to restore brightness to this modified chromophore due to the increased bulk of the indole group. In ECFP and Cerulean, the N-terminal half of the seventh strand exhibits two conformations. These conformations both have a complex set of van der Waals interactions with the chromophore.
The Y145A and H148D mutations in Cerulean stabilize these interactions and allow the chromophore to be more planar, better packed, and less prone to collisional quenching. Additional site-directed random mutagenesis in combination with fluorescence lifetime-based screening has further stabilized the seventh β-strand, resulting in a bright variant, mTurquoise2, with a quantum yield (QY) of 0.93. The red-shifted wavelength of the YFP derivatives is accomplished by the T203Y mutation and is due to π-electron stacking interactions between the substituted tyrosine residue and the chromophore. These two classes of spectral variants are often employed for Förster resonance energy transfer (FRET) experiments. Genetically encoded FRET reporters sensitive to cell signaling molecules such as calcium or glutamate, to protein phosphorylation state, protein complementation, receptor dimerization, and other processes, provide highly specific optical readouts of cell activity in real time. Semirational mutagenesis of a number of residues led to pH-sensitive mutants known as pHluorins, and later super-ecliptic pHluorins. By exploiting the rapid change in pH upon synaptic vesicle fusion, pHluorins tagged to synaptobrevin have been used to visualize synaptic activity in neurons. Redox-sensitive GFP (roGFP) was engineered by introduction of cysteines into the beta barrel structure. The redox state of the cysteines determines the fluorescent properties of roGFP. Nomenclature The nomenclature of modified GFPs is often confusing due to the overlapping mapping of several GFP versions onto a single name. For example, mGFP often refers to a GFP with an N-terminal palmitoylation that causes the GFP to bind to cell membranes. However, the same term is also used to refer to monomeric GFP, which is often achieved by the dimer-interface-breaking A206K mutation. Wild-type GFP has a weak dimerization tendency at concentrations above 5 mg/mL. mGFP also stands for "modified GFP," which has been optimized through amino acid exchange for stable expression in plant cells. In nature The purpose of both the (primary) bioluminescence (from aequorin's action on luciferin) and the (secondary) fluorescence of GFP in jellyfish is unknown. GFP is co-expressed with aequorin in small granules around the rim of the jellyfish bell. The secondary excitation peak (480 nm) of GFP does absorb some of the blue emission of aequorin, giving the bioluminescence a more green hue. The serine 65 residue of the GFP chromophore is responsible for the dual-peaked excitation spectra of wild-type GFP. It is conserved in all three GFP isoforms originally cloned by Prasher. Nearly all mutations of this residue consolidate the excitation spectra to a single peak at either 395 nm or 480 nm. The precise mechanism of this sensitivity is complex, but, it seems, involves donation of a hydrogen from serine 65 to glutamate 222, which influences chromophore ionization. Since a single mutation can dramatically enhance the 480 nm excitation peak, making GFP a much more efficient partner of aequorin, A. victoria appears to evolutionarily prefer the less-efficient, dual-peaked excitation spectrum. Roger Tsien has speculated that varying hydrostatic pressure with depth may affect serine 65's ability to donate a hydrogen to the chromophore and shift the ratio of the two excitation peaks. Thus, the jellyfish may change the color of its bioluminescence with depth. 
However, a collapse in the population of jellyfish in Friday Harbor, where GFP was originally discovered, has hampered further study of the role of GFP in the jellyfish's natural environment. Most species of lancelet are known to produce GFP in various regions of their body. Unlike A. victoria, lancelets do not produce their own blue light, and the origin of their endogenous GFP is still unknown. Some speculate that it attracts plankton towards the mouth of the lancelet, serving as a passive hunting mechanism. It may also serve as a photoprotective agent in the larvae, preventing damage caused by high-intensity blue light by converting it into lower-intensity green light. However, these theories have not been tested. GFP-like proteins have been found in multiple species of marine copepods, particularly from the Pontellidae and Aetideidae families. GFP isolated from Pontella mimocerami has shown high levels of brightness with a quantum yield of 0.92, making it nearly two-fold brighter than the commonly used EGFP isolated from A. victoria. Other fluorescent proteins There are many GFP-like proteins that, despite being in the same protein family as GFP, are not directly derived from Aequorea victoria. These include dsRed, eqFP611, Dronpa, TagRFPs, KFP, EosFP/IrisFP, Dendra, and so on. Having been developed from proteins in different organisms, these proteins can sometimes display unanticipated approaches to chromophore formation. Some, such as KFP, were developed from naturally non-fluorescent or weakly fluorescent proteins and greatly improved by mutagenesis. When GFP-like barrels of different spectral characteristics are used, the emission of one chromophore can be used to excite another chromophore (FRET), allowing for conversion between wavelengths of light. FMN-binding fluorescent proteins (FbFPs) were developed in 2007 and are a class of small (11–16 kDa), oxygen-independent fluorescent proteins that are derived from blue-light receptors. They are intended especially for use under anaerobic or hypoxic conditions, since the formation and binding of the flavin chromophore does not require molecular oxygen, as is the case with the synthesis of the GFP chromophore. Fluorescent proteins with other chromophores, such as UnaG with bilirubin, can display unique properties like red-shifted emission above 600 nm or photoconversion from a green-emitting state to a red-emitting state. They can have excitation and emission wavelengths far enough apart to achieve conversion between red and green light. A new class of fluorescent protein was evolved from a cyanobacterial (Trichodesmium erythraeum) phycobiliprotein, α-allophycocyanin, and named small ultra red fluorescent protein (smURFP) in 2016. smURFP autocatalytically self-incorporates the chromophore biliverdin without the need for an external protein known as a lyase. Jellyfish- and coral-derived GFP-like proteins require oxygen and produce a stoichiometric amount of hydrogen peroxide upon chromophore formation. smURFP does not require oxygen or produce hydrogen peroxide and uses the chromophore biliverdin. smURFP has a large extinction coefficient (180,000 M−1 cm−1) and a modest quantum yield (0.20), which gives it a biophysical brightness comparable to that of eGFP and makes it ~2-fold brighter than most red or far-red fluorescent proteins derived from coral. The spectral properties of smURFP are similar to those of the organic dye Cy5. Reviews on new classes of fluorescent proteins and applications can be found in the cited reviews. 
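The brightness figures quoted above (ε = 55,000 M−1cm−1 and QY = 0.60 for EGFP; ε = 180,000 M−1cm−1 and QY = 0.20 for smURFP) can be checked with a short calculation using the convention brightness = ε × QY. The following Python sketch is illustrative only, not part of the original article; the brightness() helper is our own naming:

    def brightness(epsilon, quantum_yield):
        # Molecular brightness as extinction coefficient (M^-1 cm^-1)
        # multiplied by fluorescence quantum yield (dimensionless).
        return epsilon * quantum_yield

    egfp = brightness(55_000, 0.60)      # 33,000 M^-1 cm^-1, as quoted for EGFP
    smurfp = brightness(180_000, 0.20)   # 36,000 M^-1 cm^-1 for smURFP
    print(egfp, smurfp, round(smurfp / egfp, 2))  # 33000.0 36000.0 1.09

The ratio of about 1.1 reproduces the statement that smURFP's biophysical brightness is comparable to that of eGFP: its three-fold lower quantum yield is offset by its much larger extinction coefficient.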
Structure GFP has a beta barrel structure consisting of eleven β-strands with a pleated sheet arrangement, with an alpha helix containing the covalently bonded chromophore 4-(p-hydroxybenzylidene)imidazolidin-5-one (HBI) running through the center. Five shorter alpha helices form caps on the ends of the structure. The beta barrel structure is a nearly perfect cylinder, 42 Å long and 24 Å in diameter (some studies have reported a diameter of 30 Å), creating what is referred to as a "β-can" formation, which is unique to the GFP-like family. HBI, the spontaneously modified form of the tripeptide Ser65–Tyr66–Gly67, is nonfluorescent in the absence of the properly folded GFP scaffold and exists mainly in the un-ionized phenol form in wtGFP. Inward-facing sidechains of the barrel induce specific cyclization reactions in Ser65–Tyr66–Gly67 that promote ionization of HBI to the phenolate form and chromophore formation. This process of post-translational modification is referred to as maturation. The hydrogen-bonding network and electron-stacking interactions with these sidechains influence the color, intensity and photostability of GFP and its numerous derivatives. The tightly packed nature of the barrel excludes solvent molecules, protecting the chromophore fluorescence from quenching by water. In addition to the auto-cyclization of the Ser65-Tyr66-Gly67, a 1,2-dehydrogenation reaction occurs at the Tyr66 residue. Besides the three residues that form the chromophore, residues such as Gln94, Arg96, His148, Thr203, and Glu222 all act as stabilizers. The residues of Gln94, Arg96, and His148 are able to stabilize by delocalizing the chromophore charge. Arg96 is the most important stabilizing residue because it prompts the structural realignments necessary for the HBI ring to form. Any mutation to the Arg96 residue would result in a decrease in the development rate of the chromophore because proper electrostatic and steric interactions would be lost. Tyr66 is the recipient of hydrogen bonds and does not ionize, in order to produce favorable electrostatics. Autocatalytic formation of the chromophore in wtGFP Mechanistically, the process involves base-mediated cyclization followed by dehydration and oxidation. In the reaction scheme originally accompanying this description, the step from intermediate 7a to 8 involves the formation of an enamine from the imine, while in the step from 7b to 9 a proton is abstracted; the final product of the scheme is the HBI fluorophore. The reactions are catalyzed by residues Glu222 and Arg96. An analogous mechanism is also possible with threonine in place of Ser65. Applications Reporter assays Green fluorescent protein may be used as a reporter gene. For example, GFP can be used as a reporter for environmental toxicity levels. This protein has been shown to be an effective way to measure the toxicity levels of various chemicals including ethanol, p-formaldehyde, phenol, triclosan, and paraben. GFP is well suited as a reporter protein because it has no effect on the host when introduced into the host's cellular environment. Because of this, no external visualization stain, ATP, or cofactors are needed. With regard to pollutant levels, the fluorescence was measured in order to gauge the effect that the pollutants have on the host cell. The cellular density of the host cell was also measured. Results from the study conducted by Song, Kim, & Seo (2016) showed that there was a decrease in both fluorescence and cellular density as pollutant levels increased. 
This indicated that cellular activity had decreased. More research into this specific application is needed to determine the mechanism by which GFP acts as a pollutant marker. Similar results have been observed in zebrafish: zebrafish injected with GFP were approximately twenty times more sensitive to cellular stresses than zebrafish that were not injected with GFP. Advantages The biggest advantage of GFP is that it can be heritable, depending on how it was introduced, allowing for continued study of cells and tissues it is expressed in. Visualizing GFP is noninvasive, requiring only illumination with blue light. GFP alone does not interfere with biological processes, but when fused to proteins of interest, careful design of linkers is required to maintain the function of the protein of interest. Moreover, if used with a monomer it is able to diffuse readily throughout cells. Fluorescence microscopy The availability of GFP and its derivatives has thoroughly redefined fluorescence microscopy and the way it is used in cell biology and other biological disciplines. While most small fluorescent molecules such as FITC (fluorescein isothiocyanate) are strongly phototoxic when used in live cells, fluorescent proteins such as GFP are usually much less harmful when illuminated in living cells. This has triggered the development of highly automated live-cell fluorescence microscopy systems, which can be used to observe cells over time expressing one or more proteins tagged with fluorescent proteins. There are many techniques to utilize GFP in a live cell imaging experiment. The most direct way of utilizing GFP is to directly attach it to a protein of interest. For example, GFP can be included in a plasmid expressing other genes to indicate a successful transfection of a gene of interest. Another method is to use a GFP that contains a mutation where the fluorescence will change from green to yellow over time, which is referred to as a fluorescent timer. With the fluorescent timer, researchers can study the state of protein production such as recently activated, continuously activated, or recently deactivated based on the color reported by the fluorescent protein. In yet another example, scientists have modified GFP to become active only after exposure to irradiation, giving researchers a tool to selectively activate certain portions of a cell and observe where proteins tagged with the GFP move from the starting location. These are only a few examples in the burgeoning field of fluorescence microscopy, and more complete reviews of biosensors utilizing GFP and other fluorescent proteins can be found in the literature. For example, GFP has been widely used in labelling the spermatozoa of various organisms for identification purposes, as in Drosophila melanogaster, where expression of GFP can be used as a marker for a particular characteristic. GFP can also be expressed in different structures enabling morphological distinction. In such cases, the gene for the production of GFP is incorporated into the genome of the organism in the region of the DNA that codes for the target proteins and that is controlled by the same regulatory sequence; that is, the gene's regulatory sequence now controls the production of GFP, in addition to the tagged protein(s). In cells where the gene is expressed, and the tagged proteins are produced, GFP is produced at the same time. 
Thus, only those cells in which the tagged gene is expressed, or the target proteins are produced, will fluoresce when observed under fluorescence microscopy. Analysis of such time-lapse movies has redefined the understanding of many biological processes including protein folding, protein transport, and RNA dynamics, which in the past had been studied using fixed (i.e., dead) material. Obtained data are also used to calibrate mathematical models of intracellular systems and to estimate rates of gene expression. Similarly, GFP can be used as an indicator of protein expression in heterologous systems. In this scenario, fusion proteins containing GFP are introduced indirectly, using RNA of the construct, or directly, with the tagged protein itself. This method is useful for studying structural and functional characteristics of the tagged protein on a macromolecular or single-molecule scale with fluorescence microscopy. The Vertico SMI microscope using the SPDM Phymod technology uses the so-called "reversible photobleaching" effect of fluorescent dyes like GFP and its derivatives to localize them as single molecules at an optical resolution of 10 nm. This can also be performed as a co-localization of two GFP derivatives (2CLM). Another powerful use of GFP is to express the protein in small sets of specific cells. This allows researchers to optically detect specific types of cells in vitro (in a dish), or even in vivo (in the living organism). GFP is considered to be a reliable reporter of gene expression in eukaryotic cells when the fluorescence is measured by flow cytometry. Genetically combining several spectral variants of GFP is a useful trick for the analysis of brain circuitry (Brainbow). Other interesting uses of fluorescent proteins in the literature include using FPs as sensors of neuron membrane potential, tracking of AMPA receptors on cell membranes, viral entry and the infection of individual influenza viruses and lentiviruses, etc. It has also been found that new lines of transgenic GFP rats can be relevant for gene therapy as well as regenerative medicine. By using "high-expresser" GFP, transgenic rats display high expression in most tissues, and many cells that have not been characterized or have been only poorly characterized in previous GFP-transgenic rats. GFP has been shown to be useful in cryobiology as a viability assay. The correlation with viability as measured by trypan blue assays was 0.97. Another application is the use of GFP co-transfection as internal control for transfection efficiency in mammalian cells. A novel possible use of GFP includes using it as a sensitive monitor of intracellular processes via an eGFP laser system made from a human embryonic kidney cell line. The first engineered living laser was made by placing an eGFP-expressing cell inside a reflective optical cavity and hitting it with pulses of blue light. At a certain pulse threshold, the eGFP's optical output becomes brighter and completely uniform in color, a pure green with a wavelength of 516 nm. Before being emitted as laser light, the light bounces back and forth within the resonator cavity and passes the cell numerous times. By studying the changes in optical activity, researchers may better understand cellular processes. GFP is used widely in cancer research to label and track cancer cells. GFP-labelled cancer cells have been used to model metastasis, the process by which cancer cells spread to distant organs. Split GFP GFP can be used to analyse the colocalization of proteins. 
This is achieved by "splitting" the protein into two fragments which are able to self-assemble, and then fusing each of these to the two proteins of interest. Alone, these incomplete GFP fragments are unable to fluoresce. However, if the two proteins of interest colocalize, then the two GFP fragments assemble together to form a GFP-like structure which is able to fluoresce. Therefore, by measuring the level of fluorescence it is possible to determine whether the two proteins of interest colocalize. Macro-photography Macro-scale biological processes, such as the spread of virus infections, can be followed using GFP labeling. In the past, mutagenic ultraviolet (UV) light was used to illuminate living organisms to detect and photograph their GFP expression. Recently, a technique using non-mutagenic LED lights has been developed for macro-photography. The technique uses an epifluorescence camera attachment based on the same principle used in the construction of epifluorescence microscopes. Transgenic pets Alba, a green-fluorescent rabbit, was created by a French laboratory commissioned by Eduardo Kac using GFP for purposes of art and social commentary. The US company Yorktown Technologies markets to aquarium shops green fluorescent zebrafish (GloFish) that were initially developed to detect pollution in waterways. NeonPets, a US-based company, has marketed green fluorescent mice to the pet industry as NeonMice. Green fluorescent pigs, known as Noels, were bred by a group of researchers led by Wu Shinn-Chih at the Department of Animal Science and Technology at National Taiwan University. A Japanese-American team created green-fluorescent cats as proof of concept to use them potentially as model organisms for diseases, particularly HIV. In 2009 a South Korean team from Seoul National University bred the first transgenic beagles with fibroblast cells from sea anemones. The dogs give off a red fluorescent light, and they are meant to allow scientists to study the genes that cause human diseases like narcolepsy and blindness. Art Julian Voss-Andreae, a German-born artist specializing in "protein sculptures," created sculptures based on the structure of GFP, including the tall sculptures "Green Fluorescent Protein" (2004) and "Steel Jellyfish" (2006). The latter sculpture is located at the place of GFP's discovery by Shimomura in 1962, the University of Washington's Friday Harbor Laboratories. See also Protein tag pGLO Yellow fluorescent protein Genetically encoded voltage indicator References Further reading Popular science book describing history and discovery of GFP External links A comprehensive article on fluorescent proteins at Scholarpedia Brief summary of landmark GFP papers Interactive Java applet demonstrating the chemistry behind the formation of the GFP chromophore Video of 2008 Nobel Prize lecture of Roger Tsien on fluorescent proteins Excitation and emission spectra for various fluorescent proteins Green Fluorescent Protein Chem Soc Rev themed issue dedicated to the 2008 Nobel Prize winners in Chemistry, Professors Osamu Shimomura, Martin Chalfie and Roger Y. Tsien Molecule of the Month, June 2003: an illustrated overview of GFP by David Goodsell. Molecule of the Month, June 2014: an illustrated overview of GFP-like variants by David Goodsell. Green Fluorescent Protein on FPbase, a fluorescent protein database Protein methods Recombinant proteins Cell imaging Protein imaging Bioluminescence Cnidarian proteins
Green fluorescent protein
Chemistry,Biology
6,146
39,305,846
https://en.wikipedia.org/wiki/Spherical%20variety
In algebraic geometry, given a reductive algebraic group G and a Borel subgroup B, a spherical variety is a G-variety with an open dense B-orbit. It is sometimes also assumed to be normal. Examples are flag varieties, symmetric spaces and (affine or projective) toric varieties; a worked example is given after the references below. There is also a notion of real spherical varieties. A projective spherical variety is a Mori dream space. Spherical embeddings are classified by so-called colored fans, a generalization of fans for toric varieties; this is known as Luna–Vust theory. In his seminal paper, Luna developed a framework to classify complex spherical subgroups of reductive groups; he reduced the classification of spherical subgroups to wonderful subgroups. He further worked out the case of groups of type A and conjectured that combinatorial objects consisting of "homogeneous spherical data" classify spherical subgroups. This is known as the Luna Conjecture. This classification is now complete according to Luna's program; see the contributions of Bravi, Cupit-Foutou, Losev and Pezzini. As conjectured by Knop, every "smooth" affine spherical variety is uniquely determined by its weight monoid. This uniqueness result was proven by Losev. A program to classify spherical varieties in arbitrary characteristic is also being developed. References Paolo Bravi, Wonderful varieties of type E, Representation Theory 11 (2007), 174–191. Paolo Bravi and Stéphanie Cupit-Foutou, Classification of strict wonderful varieties, Annales de l'Institut Fourier (2010), Volume 60, Issue 2, 641–681. Paolo Bravi and Guido Pezzini, Wonderful varieties of type D, Representation Theory 9 (2005), pp. 578–637. Paolo Bravi and Guido Pezzini, Wonderful subgroups of reductive groups and spherical systems, J. Algebra 409 (2014), 101–147. Paolo Bravi and Guido Pezzini, The spherical systems of the wonderful reductive subgroups, J. Lie Theory 25 (2015), 105–123. Paolo Bravi and Guido Pezzini, Primitive wonderful varieties, arXiv:1106.3187. Stéphanie Cupit-Foutou, Wonderful Varieties: a geometrical realization, arXiv:0907.2852. Michel Brion, "Introduction to actions of algebraic groups" Algebraic geometry
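As an illustration of the definition above (a standard worked example; the choice of example and notation are ours, not taken from this article), the projective line is a spherical variety for SL2:

    % P^1 as a spherical SL_2(C)-variety: the Borel subgroup B of
    % upper-triangular matrices acts with an open dense orbit.
    \[
    \begin{pmatrix} a & c \\ 0 & a^{-1} \end{pmatrix} \cdot [x : y]
      = [\, a x + c y : a^{-1} y \,]
    \]
    % The point [1:0] is fixed, while the orbit of [0:1] is
    \[
    B \cdot [0 : 1] = \{\, [ca : 1] \mid a \in \mathbb{C}^{\times},\ c \in \mathbb{C} \,\}
      = \{\, [z : 1] \mid z \in \mathbb{C} \,\} \cong \mathbb{A}^1,
    \]
    % an open dense B-orbit in P^1 = G/B, so P^1 is spherical.

Here the open B-orbit is the big Bruhat cell of the flag variety G/B. The same reasoning shows why toric varieties are spherical: for G = T a torus, the Borel subgroup is T itself, and the defining open dense T-orbit of a toric variety is exactly the open dense B-orbit required by the definition.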
Spherical variety
Mathematics
492
73,059,571
https://en.wikipedia.org/wiki/Hard%20Rock%20%28exercise%29
Hard Rock (sometimes Operation Hard Rock or the Hard Rock exercise) was a British civil defence exercise planned by the Conservative government to take place in September–October 1982. One of a series of regular national civil defence exercises, it followed Square Leg in 1980. As the public reaction to the scale of devastation forecast in Square Leg was poor, the planners deliberately scaled down the number of warheads supposed for Hard Rock. Despite this, the Campaign for Nuclear Disarmament (CND), who opposed nuclear warfare and were against civil defence exercises, suggested that such an attack as Hard Rock anticipated would have led to the deaths of 12.5 million people. Since 1980, many British local authorities, who played key roles in civil defence planning, had become nuclear-free zones, opposed to nuclear weapons and nuclear power. Many of these authorities refused to take part in Hard Rock, although finance and the unofficial policy of the Labour Party also played a part. By July, twenty local authorities, all Labour-run, had indicated their refusal to take part and seven more would take part in only a limited manner. Hard Rock was postponed indefinitely, effectively cancelled. In response, the government passed the Civil Defence (General Local Authority Functions) Regulations 1983, compelling local authorities to take part in civil defence exercises. Planning During the Cold War, the British government carried out a number of civil defence exercises to test the country's preparedness for the effects of war. Since 1949 this had included planning for an attack on the UK with nuclear weapons. From the mid-1970s national civil defence exercises, including a nuclear attack, had been run every 2–3 years. Hard Rock was scheduled to be run in September–October 1982 and would have been the largest civil defence exercise for 15 years. Planning for Hard Rock was started by the National Council for Civil Defence in 1980. It was the first British civil defence exercise to not be planned entirely by the military. The involvement of local authorities in the exercise came via the Civil Defence (Planning) Regulations 1974 which required them to maintain contingency plans for civil defence in their areas. The attack scenario for the previous simulation, the 1980 Square Leg exercise, had been leaked to the press and the Campaign for Nuclear Disarmament (CND). This scenario envisaged more than 200 megatons of nuclear weapons being detonated on the country, a quantity in line with other British civil defence exercises conducted in the 1970s and advice in the Home Office's training manual for scientific advisors, issued in 1977. The public response to the projected high casualty rates and widespread destruction had been poor so the Hard Rock planners deliberately assumed an attack scenario with fewer weapons, totalling less than 50 megatons, and avoiding strikes on some obvious targets such as American air bases. Journalist and writer on Cold War military secrets Duncan Campbell noted that no missiles were assumed to be targeted at London, Manchester, Edinburgh, Liverpool, Bristol or Cardiff and those targeted at other cities were presumed to miss; in addition, the US and British submarine bases at Holy Loch and Faslane and the main British and North Atlantic Treaty Organization (NATO) military control centres were also assumed not to be targets. Campbell lists 54 targets in the final exercise scenario, down from 105 in a June 1981 plan. 
He suggested that the Home Office and Ministry of Defence had removed "politically undesirable" targets from the scenario. He noted that other political decisions affected the exercise: the scale of refugee movements was toned down and references to civil disorder were kept vague. The issuing of the Protect and Survive booklet to the public was to be one of the steps taken in the exercise, but its brand had become so embarrassing to the government that it was referred to as the "Public, Do-It-Yourself Civil Defence" booklet. Even with fewer nuclear weapons, the Hard Rock exercise projected large-scale damage and loss of life. The CND estimated that an attack of 50 megatons would result in the deaths of 12.5 million people, while an attack with 220 megatons would lead to the deaths of 39 million, some 72 per cent of the population at the time. The CND publicised these estimates under the title of "Hard Luck". Campbell, using a model by Philip Steadman of the Open University and Stan Openshaw of Newcastle University, forecast 12 million deaths or serious injuries, of which 2 million were attributed to the bombing and 5 million to the effects of fallout. Exercise According to Campbell, the exercise would have begun with a simulated transition-to-war period from 19 September, during which the officials in their bunkers would be given simulated daily briefings and news bulletins, including simulations of panic buying and fuel shortages. A war with the Soviet Union was to have broken out at 4:30 am on 27 September with an invasion of West Germany and conventional air raids on the UK. During this time, the military had a limited role to play in the exercise, simulating reconnaissance flights over nuclear target areas and practising moving ships in and out of ports scheduled to remain unaffected. The exercise scenario forecast that the nuclear strike would begin at 8:00 pm on 2 October and continue into the following morning. The exercise would run for a simulated one-month post-attack period. Exercise Hard Rock would have focused on the response by local authorities in the aftermath of the attack. It envisaged the abandonment of irradiated cities, where fires would be left to burn uncontrolled, and the breakdown of society into lawlessness. The exercise included decisions to triage casualties by likelihood of survival, with those affected by severe radiation sickness left to die without food or treatment, and to prioritise resources for those healthy adults with the skills necessary to keep remaining infrastructure working. Campbell records that one of the measures taken in the exercise was the release of all bar 1,000 civilian prisoners. Opposition and local authority refusals The running of the Hard Rock exercise was opposed by the CND, who said it made no sense to run an exercise on post-attack response if the government's position was that the nuclear deterrent was effective. The pacifist Peace Pledge Union opposed the exercise on the grounds that it "normalised militarism". In 1980, Manchester City Council declared itself a nuclear-free zone, proclaiming nuclear weapons and nuclear power unwelcome within its boundaries. It was the first British local authority to do so, but by 1982, 143 authorities had joined it. Many of these authorities refused to participate in Hard Rock on principle, though other concerns such as cost were also a factor. 
Refusals were largely in local authorities controlled by the Labour Party, with some using it as an opportunity to demonstrate their opposition to the defence policies of the Conservative government. The Labour Party National Executive Committee (NEC) had, in June 1981, advised local authorities to "refuse to co-operate with all but the bare legal minimum necessary under the 1974 Civil Defence (Planning) exercises and arrangements which are concerned with nuclear weapons and nuclear war preparations". The CND offered support to any authorities that decided not to participate and, with the Scientists Against Nuclear Arms organisation and the unofficial support of the Labour NEC, produced a pack of information outlining its views on the exercise. As part of its opposition to Hard Rock, the Greater London Council invited the public to view its secret nuclear bunkers, intended for post-attack command by civil defence personnel, with 4,800 people visiting them in six days. Had the exercise gone ahead, protestors planned to establish peace camps outside the regional seats of government from which civil defence operations would be co-ordinated. The protestors planned to impede access to the facilities and to identify personnel who attended the exercises. By July 1982, 19 county councils and the Greater London Council, out of the 54 local authorities which had been asked to participate, had confirmed their refusal. During Prime Minister's Questions on 15 July, the prime minister, Margaret Thatcher, stated that all 20 authorities that refused were Labour-run. A further seven local authorities stated that they could comply in only a limited manner. By September, the Home Secretary Willie Whitelaw had announced that the exercise had been indefinitely postponed, though it was, in effect, cancelled. Legacy Despite other factors such as politics and finance, the CND believed the cancellation of Hard Rock was primarily the result of its campaign. The CND's Scottish secretary Ian Davison called it the organisation's first major victory, and Peter Byrd, a writer on the history of the peace movement, described the cancellation as "probably [its] biggest single success". The government blamed the Labour NEC for the cancellation. In the wake of the cancellation of Hard Rock, Michael Heseltine, secretary of state for defence, established Defence Secretariat 19 within the ministry to better explain to the public the government's policy on nuclear deterrence and multilateral disarmament. The government introduced the Civil Defence (General Local Authority Functions) Regulations in 1983. These compelled local authorities to support national civil defence exercises and imposed financial penalties on them and individual councillors if they did not comply. The regulations also gave the government the power to appoint special commissioners to run the exercises within the local authorities. The civil service proposed running Hard Rock again after the Conservative victory in the 1983 general election, but it was never run. That same year, the Labour peer Willie Ross, Baron Ross of Marnock, claimed in the House of Lords that the Ministry of Defence was glad the exercise had been cancelled, as it would have shown widespread inadequacies in local civil defence planning. A Home Office investigation in 1988 found a widespread refusal by local authorities to implement the Civil Defence (General Local Authority Functions) Regulations 1983. 
From 1986 British civil defence planning transitioned away from large national exercises to the Planned Programme of Implementation (PPI) at a local level. PPI shifted the focus of civil defence preparations from nuclear war to an "all hazards" approach for a variety of civil emergencies. Map of nuclear warhead targets The following map indicates, according to Duncan Campbell, the targets for nuclear warheads supposed in Hard Rock. Air burst detonations are shown with blue markers, ground burst detonations with red markers. An asterisk (*) denotes a near miss which might be off target. References Nuclear warfare United Kingdom nuclear command and control Cold War history of the United Kingdom 1982 in the United Kingdom
Hard Rock (exercise)
Chemistry
2,098
408,116
https://en.wikipedia.org/wiki/R%C3%A9union%20ibis
The Réunion ibis or Réunion sacred ibis (Threskiornis solitarius) is an extinct species of ibis that was endemic to the volcanic island of Réunion in the Indian Ocean. The first subfossil remains were found in 1974, and the ibis was first scientifically described in 1987. Its closest relatives are the Malagasy sacred ibis, the African sacred ibis, and the straw-necked ibis. Travellers' accounts from the 17th and 18th centuries described a white bird on Réunion that flew with difficulty and preferred solitude, which was subsequently referred to as the "Réunion solitaire". In the mid 19th century, the old travellers' accounts were incorrectly assumed to refer to white relatives of the dodo, due to one account specifically mentioning dodos on the island, and because 17th-century paintings of white dodos had recently surfaced. However, no fossils referable to dodo-like birds were ever found on Réunion, and it was later questioned whether the paintings had anything to do with the island. Other identities were suggested as well, based only on speculations. In the late 20th century, the discovery of ibis subfossils led to the idea that the old accounts actually referred to an ibis species instead. The idea that the "solitaire" and the subfossil ibis are identical was met with limited dissent, but is now widely accepted. Combined, the old descriptions and subfossils show that the Réunion ibis was mainly white, with this colour merging into yellow and grey. The wing tips and plumes of ostrich-like feathers on its rear were black. The neck and legs were long, and the beak was relatively straight and short for an ibis. It was more robust in build than its extant relatives, but was otherwise quite similar to them. It would have been no longer than 65 cm (25 in) in length. Subfossil wing bones indicate it had reduced flight capabilities, a feature perhaps linked to seasonal fattening. The diet of the Réunion ibis was worms and other items foraged from the soil. In the 17th century, it lived in mountainous areas, but it may have been confined to these remote heights by heavy hunting by humans and predation by introduced animals in the more accessible areas of the island. Visitors to Réunion praised its flavour, and its flesh was therefore sought after. These factors are believed to have driven the Réunion ibis to extinction by the early 18th century. Taxonomy The taxonomic history of the Réunion ibis is convoluted and complex, due to the ambiguous and meagre evidence that was available to scientists until the late 20th century. The supposed "white dodo" of Réunion is now believed to have been an erroneous conjecture based on the few contemporary reports which described the Réunion ibis, combined with paintings of white dodos from Mauritius by the Dutch painters Pieter Withoos and Pieter Holsteyn II (and derivatives) from the 17th century that surfaced in the 19th century. The English Chief Officer John Tatton was the first to mention a specifically white bird on Réunion, in 1625. The French occupied the island from 1646 and onwards, and referred to this bird as the "solitaire". M. Carré of the French East India Company described the "solitaire" in 1699, explaining the reason for its name: The marooned French Huguenot François Leguat used the name "solitaire" for the Rodrigues solitaire, a Raphine bird (related to the dodo) he encountered on the nearby island of Rodrigues in the 1690s, but it is thought he borrowed the name from a 1689 tract by Marquis Henri Duquesne which mentioned the Réunion species. 
Duquesne himself had probably based his own description on an earlier one. No specimens of the "solitaire" were ever preserved. The two individuals Carré attempted to send to the royal menagerie in France did not survive in captivity. Billiard claimed that the French administrator Bertrand-François Mahé de La Bourdonnais sent a "solitaire" to France from Réunion around 1740. Since the Réunion ibis is believed to have gone extinct by this date, the bird may actually have been a Rodrigues solitaire. The only contemporary writer who referred specifically to "dodos" inhabiting Réunion was the Dutch sailor Willem Ysbrandtszoon Bontekoe, though he did not mention their colouration: When his journal was published in 1646, it was accompanied by an engraving which is now known to have been copied after one of the dodos in the Flemish painter Roelant Savery's "Crocker Art Gallery sketch". Since Bontekoe was shipwrecked and lost all his belongings after visiting Réunion in 1619, he may not have written his account until he returned to Holland, seven years later, which would put its reliability in question. He may have concluded in hindsight that it was a dodo, finding what he saw similar to accounts of that bird. Early interpretation In the 1770s, the French naturalist Comte de Buffon stated, for unclear reasons, that the dodo inhabited both Mauritius and Réunion. He also combined accounts of the Rodrigues solitaire, of a bird from Mauritius ("oiseau de Nazareth", now thought to be a dodo), and of the "solitaire" Carré reported from Réunion under one "solitaire" section, indicating he believed there were both a dodo and a "solitaire" on Réunion. The English naturalist Hugh Edwin Strickland discussed the old descriptions of the "solitaire" in his 1848 book The Dodo and Its Kindred, and concluded it was distinct from the dodo and Rodrigues solitaire due to its colouration. The Belgian scientist Edmond de Sélys Longchamps coined the scientific name Apterornis solitarius for the "solitaire" in 1848, apparently making it the type species of the genus, in which he also included two other Mascarene birds only known from contemporary accounts, the red rail and the Réunion swamphen. As the name Apterornis had already been used for a different bird by the English biologist Richard Owen, and the other former names were likewise invalid, the French naturalist Charles Lucien Bonaparte coined the new binomial Ornithaptera borbonica in 1854 (Bourbon was the original French name for Réunion). In 1854, the German ornithologist Hermann Schlegel placed the "solitaire" in the same genus as the dodo, and named it Didus apterornis. He restored it strictly according to contemporary accounts, which resulted in an ibis or stork-like bird instead of a dodo. In 1856, William Coker announced the discovery of a 17th-century "Persian" painting of a white dodo among waterfowl, which he had been shown in England. The artist was later identified as Pieter Withoos, and many prominent 19th-century naturalists subsequently assumed the image depicted the white "solitaire" of Réunion, a possibility originally proposed by ornithologist John Gould. Simultaneously, several similar paintings of white dodos by Pieter Holsteyn II were discovered in the Netherlands. Other paintings and drawings were also later identified as showing white dodos. 
In 1869, the English ornithologist Alfred Newton argued that Withoos' painting and the engraving in Bontekoe's memoir depicted a living Réunion dodo that had been brought to Holland, while explaining its blunt beak as a result of beak trimming to prevent it from injuring humans. He also brushed aside the inconsistencies between the illustrations and descriptions, especially the long, thin beak implied by one contemporary account. Newton's words particularly cemented the validity of this connection among contemporary peers, and several of them expanded on his views. The Dutch zoologist Anthonie Cornelis Oudemans suggested in 1917 that the discrepancies between the paintings and the old descriptions were due to the paintings showing a female, and that the species was, therefore, sexually dimorphic. The British zoologist Walter Rothschild claimed in 1907 that the yellow wings might have been due to albinism in this particular specimen, since the old descriptions described these as black. By the early 20th century, many other paintings and even physical remains were claimed to be of white dodos, amid much speculation. Rothschild commissioned the British artist Frederick William Frohawk to restore the "solitaire" as both a white dodo, based on the Withoos painting, and as a distinct bird based on the French traveller Sieur Dubois' 1674 description, for his 1907 book Extinct Birds. In 1937, the Japanese writer Masauji Hachisuka suggested that the old accounts and paintings represented two different species, and referred to the white dodos of the paintings as Victoriornis imperialis (honouring King Victor Emmanuel III of Italy), and the "solitaire" of the accounts as Ornithaptera solitarius (using the generic name coined by Bonaparte). Hachisuka also suggested that a 1618 Italian illustration previously identified as a dodo being hunted actually showed a male, brown Réunion solitaire (he ruled out Rodrigues because that island was not yet inhabited at the time). To him, this cleared up the confusion between the two species, which is why he named the white dodo for the King of Italy (the illustration being from Italy). Today the illustration is thought to depict an ostrich or a bustard. Modern interpretation Until the late 1980s, belief in the existence of a white dodo on Réunion was the orthodox view, and only a few researchers doubted the connection between the "solitaire" accounts and the dodo paintings. The American ornithologist James Greenway cautioned in 1958 that no conclusions could be made without solid evidence such as fossils, and that nothing indicated that the white dodos in the paintings had anything to do with Réunion. In 1970, the American ornithologist Robert W. Storer predicted that if any such remains were found, they would not belong to Raphinae like the dodo and Rodrigues solitaire (or even to the pigeon family like them). The first subfossil bird remains on Réunion, the lower part of a tarsometatarsus, were found in 1974, and considered a new species of stork in the genus Ciconia by the British ornithologist Graham S. Cowles in 1987. The remains were found in a cave, which indicated it had been brought there and eaten by early settlers. It was speculated that the remains could have belonged to a large, mysterious bird described by Leguat, and called "Leguat's giant" by some ornithologists. "Leguat's giant" is now thought to be based on a locally extinct population of greater flamingos. 
Also in 1987, a subfossil tarsometatarsus of an ibis found in a cave was described as Borbonibis latipes (the specific name means "wide foot") by the French palaeontologists Cécile Mourer-Chauviré and François Moutou, and thought related to the bald ibises of the genus Geronticus. In 1994, Cowles concluded that the "stork" remains he had reported belonged to Borbonibis, since their tarsometatarsi were similar. The 1987 discovery led the English biologist Anthony S. Cheke to suggest to one of the describers of Borbonibis that the subfossils may have been of the "solitaire". In 1995, the French ecologist Jean-Michel Probst reported his discovery of a bird mandible during an excavation on Réunion the previous year, and suggested it may have belonged to the ibis or the "solitaire". In 1995, the describers of Borbonibis latipes suggested that it represented the "Réunion solitaire", and reassigned it to the ibis genus Threskiornis, now combined with the specific name from de Sélys-Longchamps' 1848 binomial for the "solitaire" (making Borbonibis latipes a junior synonym). The authors pointed out that the contemporary descriptions matched the appearance and behaviour of an ibis more than those of a member of the Raphinae, especially due to its comparatively short and straight mandible, and because ibis remains were abundant in some localities; it would be strange if contemporary writers never mentioned such a relatively common bird, whereas they mentioned most other species subsequently known from fossils. The possible origin of the 17th-century white dodo paintings was examined by the Spanish biologist Arturo Valledor de Lozoya in 2003, and independently by the Mascarene fauna experts Cheke and Julian Hume in 2004. The Withoos and Holsteyn paintings are clearly derived from each other, and Withoos likely copied his dodo from one of Holsteyn's works, since these were probably produced at an earlier date. All later white dodo pictures are thought to be based on these paintings. According to the aforementioned writers, it appears these pictures were themselves derived from a whitish dodo in a previously unreported painting called Landscape with Orpheus and the Animals, produced by Roelant Savery c. 1611. The dodo was apparently based on a stuffed specimen then in Prague; a walghvogel (old Dutch for dodo) described as having a "dirty off-white colouring" was mentioned in an inventory of specimens in the Prague collection of the Holy Roman Emperor Rudolf II, to whom Savery was contracted at the time (1607–1611). Savery's several later dodo images all show greyish birds, possibly because he had by then seen a normal specimen. Cheke and Hume concluded the painted specimen was white due to albinism, and that this peculiar feature was the reason it was collected from Mauritius and brought to Europe. Valledor de Lozoya instead suggested that the light plumage was a juvenile trait, a result of bleaching of old taxidermy specimens, or simply due to artistic license. In 2018, the British ornithologist Jolyon C. Parish and Cheke suggested that the painting was instead executed after 1614, or even after 1626, based on some of the motifs. While many subfossil elements from throughout the skeleton have been assigned to the Réunion ibis, no remains of dodo-like birds have ever been found on Réunion. A few later sources have taken issue with the proposed ibis identity of the "solitaire", and have even regarded the "white dodo" as a valid species. 
The British writer Errol Fuller agrees that the 17th-century paintings do not depict Réunion birds, but has questioned whether the ibis subfossils are necessarily connected to the "solitaire" accounts. He notes that no evidence indicates the extinct ibis survived until the time Europeans reached Réunion. Cheke and Hume have dismissed such sentiments as being mere "belief" and "hope" in the existence of a dodo on the island. Evolution The volcanic island of Réunion is only three million years old, whereas Mauritius and Rodrigues, each with its own flightless Raphine species, are eight to ten million years old, and according to Cheke and Hume it is unlikely that either bird would have been capable of flying after five or more million years of adapting to the islands. Therefore, it is unlikely that Réunion could have been colonised by flightless birds from these islands, and only flighted species on the island have relatives there. Three million years is enough time for flightless and weak flying abilities to have evolved in bird species on Réunion itself, but Mourer-Chauviré and colleagues pointed out that such species would have been wiped out by the eruption of the volcano Piton des Neiges between 300,000 and 180,000 years ago. Most recent species are therefore likely descendants of animals which recolonised the island from Africa or Madagascar after this event, leaving too little time for a bird to become flightless. In 1995, a morphological study by Mourer-Chauviré and colleagues suggested the closest extant relatives of the Réunion ibis are the African sacred ibis (T. aethiopicus) of Africa and the straw-necked ibis (T. spinicollis) of Australia. Cheke and Hume instead suggested that it was closest to the Malagasy sacred ibis (T. bernieri), and therefore of ultimately African origin. Description Contemporary accounts described the species as having white and grey plumage merging into yellow, black wing tips and tail feathers, a long neck and legs, and limited flight capabilities. Dubois' 1674 account is the most detailed contemporary description of the bird, here as translated by Strickland in 1848: According to Mourer-Chauviré and colleagues, the plumage colouration mentioned is similar to that of the related African sacred ibis and straw-necked ibis, which are also mainly white and glossy black. In the reproductive season, the ornamental feathers on the back and wing tips of the African sacred ibis look similar to the feathers of an ostrich, which echoes Dubois' description. Likewise, a subfossil lower jaw found in 1994 showed that the bill of the Réunion ibis was relatively short and straight for an ibis, which corresponds with Dubois' woodcock comparison. Cheke and Hume have suggested that the French word (bécasse) from Dubois' original description, usually translated to "woodcock", could also mean oystercatcher, another bird with a long, straight, but slightly more robust, bill. They have also pointed out that the last sentence is mistranslated, and actually means the bird could be caught by running after it. The bright colouration of the plumage mentioned by some authors may refer to iridescence, as seen in the straw-necked ibis. Subfossils of the Réunion ibis show that it was more robust, likely much heavier, and had a larger head than the African sacred and straw-necked ibises. It was nonetheless similar to them in most features. According to Hume, it would have been no longer than 65 cm (25 in) in length, the size of the African sacred ibis. 
Rough protuberances on the wing bones of the Réunion ibis are similar to those of birds that use their wings in combat. It was perhaps flightless, but this has not left significant osteological traces; no complete skeletons have been collected, but of the known pectoral elements, only one feature indicates reduction in flight capability. The coracoid is elongated and the radius and ulna are robust, as in flighted birds, but a particular foramen (or opening) between a metacarpal and the alular metacarpal is otherwise only known from flightless birds, such as some ratites, penguins, and several extinct species. Behaviour and ecology As contemporary accounts are inconsistent on whether the "solitaire" was flightless or had some flight capability, Mourer-Chauviré and colleagues suggested that this was dependent on seasonal fat-cycles, meaning that individuals fattened themselves during cool seasons, but were slim during hot seasons; perhaps it could not fly when it was fat, but could when it was not. However, Dubois specifically stated the "solitaires" did not have fat-cycles, unlike most other Réunion birds. The only mention of its diet and exact habitat is the account of the French cartographer Jean Feuilley from 1708, which is also the last record of a living individual: The diet and mode of foraging described by Feuilley match those of an ibis, whereas members of the Raphinae are known to have been fruit eaters. The species was termed a land-bird by Dubois, so it did not live in typical ibis habitats such as wetlands. This is similar to the Réunion swamphen, which lived in forest rather than in the swamps that are otherwise typical swamphen habitat. Cheke and Hume proposed that the ancestors of these birds colonised Réunion before swamps had developed, and had therefore become adapted to the available habitats. They were perhaps prevented from colonising Mauritius as well due to the presence of red rails there, which may have occupied a similar niche. The Réunion ibis appears to have lived at high altitudes, and perhaps had a limited distribution. Accounts by early visitors indicate the species was found near their landing sites, but it was found only in remote places by 1667. The bird may have survived in eastern lowlands until the 1670s. Though many late-17th-century accounts state the bird was good food, Feuilley stated it tasted bad. This may be because it changed its diet when it moved to more rugged, higher terrain, to escape pigs that destroyed its nests; since it had limited flight capabilities, it probably nested on the ground. Many other endemic species of Réunion became extinct after the human colonisation and the resulting disruption of the island's ecosystem. The Réunion ibis lived alongside other recently extinct birds such as the hoopoe starling, the Mascarene parrot, the Réunion parakeet, the Réunion swamphen, the Réunion scops owl, the Réunion night heron, and the Réunion pink pigeon. Extinct reptiles include the Réunion giant tortoise and an undescribed Leiolopisma skink. The small Mauritian flying fox and the snail Tropidophora carinata lived on Réunion and Mauritius, but vanished from both islands. Extinction As Réunion was populated by settlers, the Réunion ibis appears to have become confined to the tops of mountains. Introduced predators such as cats and rats took a toll. Overhunting also contributed, and several contemporary accounts state the bird was widely hunted for food. 
In 1625, John Tatton described the tameness of the bird and how easy it was to hunt, as well as the large quantity consumed: In 1671, Melet mentioned the culinary quality of this species, and described the slaughter of several types of birds on the island: The last definite account of the "solitaire" of Réunion was Feuilley's from 1708, indicating that the species probably became extinct sometime early in the century. In the 1820s, the French navigator Louis de Freycinet asked an old slave about drontes (old Dutch word for dodo), and was told the bird existed around Saint-Joseph when his father was an infant. This would perhaps be a century earlier, but the account may be unreliable. Cheke and Hume suspect that feral cats initially hunted wildlife in the lowlands and later turned to higher inland areas, which were probably the last stronghold of the Réunion ibis, as they were unreachable by pigs. The species is thought to have been driven to extinction around 1710–1715. References External links Birds described in 1848 Bird extinctions since 1500 Birds of Réunion Controversial bird taxa Extinct birds of Indian Ocean islands Ibises Threskiornis Taxa named by Edmond de Sélys Longchamps
Réunion ibis
Biology
4,721
34,963,497
https://en.wikipedia.org/wiki/Insignificance
People may face feelings of insignificance due to a number of causes, including having low self-esteem, being depressed, living in a huge, impersonal city, comparing themselves to wealthy celebrity success stories, working in a huge bureaucracy, or being in awe of a natural wonder. Psychological factors A person's "...sense of personal insignificance comes from two primary experiences: (a) the developmental experience with its increasing awareness of separation and loss, transience, and the sense of lost felt perfectibility; and (b) the increasing cognitive awareness of the immutable laws of biology and the limitations of the self and others in which idealization gives way to painful reality." To deal with feelings of insignificance, "...each individual seeks narcissistic reparation through the elaboration of a personal narrative or myth, a story, which gives one's life a feeling of personal significance, meaning, and purpose." These "...myths provide the individual with a personal sense of identity, and they confirm and affirm memberships in a group or community, and provide guidelines and an idealized set of behaviors..., [and] endorse an explanation for the mysterious universe." In modern society, people living in crowded, anonymous major cities may face feelings of insignificance. Georg Simmel's work has addressed the issue of how the "dissociation typical of modern city life, the freeing of the person from traditional social ties as from each other" can lead to a "loss or diminution of individuality." Moreover, when a person feels like "...just another face in the crowd, an object of indifference to strangers", it can "lead to feelings of insignificance..." Individuals working in large, bureaucratic organizations who do not have "concrete evidence of success" may have "feelings of insignificance, disillusionment, and helplessness, which are the hallmarks of burnout". Some people in bureaucratic jobs who lack meaningful tasks, and who feel that institutional mechanisms or obstacles prevent them from receiving official recognition for their efforts, may also face boreout. People facing an acute depression constantly have "[g]uiltiness and insignificance feelings". People facing issues of inferiority, due to the subjective, global, and judgmental self-appraisal that they are deficient, may also have feelings of insignificance. In the book The Fear of Insignificance, psychologist Carlo Strenger "...diagnoses the wide-spread fear of the global educated class of leading insignificant lives." Strenger warns "...that the global celebrity culture is adding fuel to the 'fear of insignificance' by undermining one's self-image and sense of self-worth." He noted that "...over recent years people around the world have been suffering from an increasing fear of their own 'insignificance'." He argues that the "impact of the global infotainment network on the individual is to blame," because it has led to the creation of "a new species...homo globalis – global man." In this new system, people "...are defined by our intimate connection to the global infotainment network, which has turned ranking and rating people on scales of wealth and celebrity into an obsession." Strenger states that "...as humans we naturally measure ourselves to those around us, but now that we live in a 'global village' we are comparing ourselves with the most 'significant' [celebrity] people in the world, and finding ourselves wanting." 
He notes that "...in the past being a lawyer or doctor was a very reputable profession, but in this day and age, even high achievers constantly fear that they are insignificant when they compare themselves to [celebrity] success stories in the media." Strenger claims that this "...creates highly unstable self-esteem and an unstable society." Alain de Botton describes some of the same issues in his book Status Anxiety. De Botton's book examines people's anxiety about whether they are judged a success or a failure. De Botton claims that chronic anxiety about status is an inevitable side effect of any democratic, ostensibly egalitarian society. Edith Wharton stated that “It is less mortifying to believe one's self unpopular than insignificant, and vanity prefers to assume that indifference is a latent form of unfriendliness.” Leo Tolstoy wrote that “If you once realize that to-morrow, if not to-day, you will die and nothing will be left of you, everything becomes insignificant!” In philosophy Blaise Pascal emphasized "the apparent insignificance of human existence", the "...dread of an unknown future", and the "...experience of being dominated by political and natural forces that far exceed our limited powers"; these elements "strike a chord of recognition with some of the existentialist writings that emerged in Europe following the Second World War." Erich Fromm states that in modern capitalist societies, people develop a "...feeling of personal insignificance and powerlessness" due to "...economic recessions, global wars and terrorism." Fromm argues that in capitalist societies, the "...individual became subordinated to capitalist production and worked for profit's sake, for the development of new investment capital and for conspicuous spending." In making people "...work for extrapersonal ends," capitalism made people into a "servant to the very machine he built" and caused feelings of insignificance to arise. In religion Martin Luther believed that the solution to the feelings of insignificance felt by the common person "...was to accept individual insignificance, to submit, to give up individual will and strength and hope to become acceptable to God." In relation to awe A person who is in awe of a monumental natural wonder, such as a massive mountain peak or waterfall, may feel insignificant. Awe is an emotion comparable to wonder but less joyous, and more fearful or respectful. Awe is defined in Robert Plutchik's Wheel of emotions as a combination of surprise and fear. One dictionary definition is "an overwhelming feeling of reverence, admiration, fear, etc., produced by that which is grand, sublime, extremely powerful, or the like: in awe of God; in awe of great political figures". In general awe is directed at objects considered to be more powerful than the subject, such as the breaking of huge waves on the base of a rocky cliff, the thundering roar of a massive waterfall, the Great Pyramid of Giza, the Grand Canyon, or the vastness of open space in the cosmos (e.g., the overview effect). In her column in Scientific American, Jennifer Ouellette referred to the vastness of the cosmos: "If one embraces an atheist worldview, it necessarily requires embracing, even celebrating, one's insignificance. It's a tall order, I know, when one is accustomed to being the center of attention. The universe existed in all its vastness before I was born, and it will exist and continue to evolve after I am gone. But knowing that doesn't make me feel bleak or hopeless. I find it strangely comforting."
In literary philosophy The concept of "insignificance" is also important to the literary philosophy of cosmicism. One of the prominent themes in cosmicism is the utter insignificance of humanity. H. P. Lovecraft believed that "the human race will disappear. Other races will appear and disappear in turn. The sky will become icy and void, pierced by the feeble light of half-dead stars. Which will also disappear. Everything will disappear." Colin Wilson criticizes “the sense of defeat, or disaster, or futility, that seems to underlie so much...20th century literature", and its tendency "...to portray human existence as insignificant and futile." Wilson "...calls this affliction the "fallacy of insignificance", and as he explains in The Stature of Man this fallacy is unconsciously embedded in the psychology of the modern individual." Wilson argues that the "other-directed individual...is the typical person found in our modern society today and is a victim of the "fallacy of insignificance"." He claims that the "...other directed individual has been conditioned by society to lack self-confidence in their ability to achieve anything of real worth, and thus they conform to society to escape their feelings of unimportance and uselessness." References See also The Festival of Insignificance, a novel by Milan Kundera Emotions Conformity Mental states
Insignificance
Biology
1,844
52,102,225
https://en.wikipedia.org/wiki/NGC%20318
NGC 318 is a lenticular galaxy in the constellation Pisces. It was discovered on November 29, 1850 by Bindon Stoney. References External links Pisces (constellation) Lenticular galaxies
NGC 318
Astronomy
50
25,692,725
https://en.wikipedia.org/wiki/Bit-reversal%20permutation
In applied mathematics, a bit-reversal permutation is a permutation of a sequence of n items, where n = 2^k is a power of two. It is defined by indexing the elements of the sequence by the numbers from 0 to n − 1, representing each of these numbers by its binary representation (padded to have length exactly k), and mapping each item to the item whose representation has the same bits in the reversed order. Repeating the same permutation twice returns to the original ordering on the items, so the bit reversal permutation is an involution. This permutation can be applied to any sequence in linear time while performing only simple index calculations. It has applications in the generation of low-discrepancy sequences and in the evaluation of fast Fourier transforms. Example Consider the sequence of eight letters a, b, c, d, e, f, g, h. Their indexes are the binary numbers 000, 001, 010, 011, 100, 101, 110, and 111, which when reversed become 000, 100, 010, 110, 001, 101, 011, and 111. Thus, the letter a in position 000 is mapped to the same position (000), the letter b in position 001 is mapped to the fifth position (the one numbered 100), etc., giving the new sequence a, e, c, g, b, f, d, h. Repeating the same permutation on this new sequence returns to the starting sequence. Writing the index numbers in decimal (but, as above, starting with position 0 rather than the more conventional start of 1 for a permutation), the bit-reversal permutations on n = 2^k items, for k = 0, 1, 2, 3, are: 0; 0, 1; 0, 2, 1, 3; and 0, 4, 2, 6, 1, 5, 3, 7. Each permutation in this sequence can be generated by concatenating two sequences of numbers: the previous permutation, with its values doubled, and the same sequence with each value increased by one. Thus, for example, doubling the length-4 permutation 0, 2, 1, 3 gives 0, 4, 2, 6, adding one gives 1, 5, 3, 7, and concatenating these two sequences gives the length-8 permutation 0, 4, 2, 6, 1, 5, 3, 7 (see the code sketch below). Generalizations The generalization to radix-b representations, for b > 2, and to n = b^k, is a digit-reversal permutation, in which the base-b digits of the index of each element are reversed to obtain the permuted index. The same idea has also been generalized to mixed radix number systems. In such cases, the digit-reversal permutation should simultaneously reverse the digits of each item and the bases of the number system, so that each reversed digit remains within the range defined by its base. Permutations that generalize the bit-reversal permutation by reversing contiguous blocks of bits within the binary representations of their indices can be used to interleave two equal-length sequences of data in-place. There are two extensions of the bit-reversal permutation to sequences of arbitrary length. These extensions coincide with bit-reversal for sequences whose length is a power of 2, and their purpose is to separate adjacent items in a sequence for the efficient operation of the Kaczmarz algorithm. The first of these extensions, called efficient ordering, operates on composite numbers, and it is based on decomposing the number into its prime components. The second extension, called EBR (extended bit-reversal), is similar in spirit to bit-reversal. Given an array of size n, EBR fills the array with a permutation of the numbers in the range from 0 to n − 1 in linear time, in which successive numbers are separated in the permutation by a guaranteed minimum number of positions. Applications Bit reversal is most important for radix-2 Cooley–Tukey FFT algorithms, where the recursive stages of the algorithm, operating in-place, imply a bit reversal of the inputs or outputs. Similarly, mixed-radix digit reversals arise in mixed-radix Cooley–Tukey FFTs.
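The doubling-and-concatenation construction above, and the swap-based in-place application described under Algorithms below, can be made concrete with a short sketch. The following Python is illustrative only, not drawn from any cited implementation:

```python
def bit_reversal_permutation(k):
    """Indices 0..2**k - 1 in bit-reversed order, built by the
    doubling-and-concatenation rule: double the previous permutation,
    then append the same values with one added. Total work is
    1 + 2 + 4 + ... + 2**k, i.e. linear in the output length."""
    perm = [0]
    for _ in range(k):
        perm = [2 * i for i in perm] + [2 * i + 1 for i in perm]
    return perm


def bit_reverse_in_place(data):
    """Apply the bit-reversal permutation to data (length a power of two)
    by swapping each index with its bit-reversal exactly once."""
    n = len(data)
    k = n.bit_length() - 1
    for i in range(n):
        j = int(bin(i)[2:].zfill(k)[::-1], 2)  # reverse the k-bit index
        if j > i:  # swap each pair only once; fixed points stay put
            data[i], data[j] = data[j], data[i]
    return data


print(bit_reversal_permutation(3))             # [0, 4, 2, 6, 1, 5, 3, 7]
print(bit_reverse_in_place(list("abcdefgh")))  # ['a','e','c','g','b','f','d','h']
```

Applying bit_reverse_in_place twice returns the input to its original order, reflecting the involution property noted above.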
The bit reversal permutation has also been used to devise lower bounds in distributed computation. The Van der Corput sequence, a low-discrepancy sequence of numbers in the unit interval, is formed by reinterpreting the indexes of the bit-reversal permutation as the fixed-point binary representations of dyadic rational numbers. Bit-reversal permutations are often used in finding lower bounds on dynamic data structures. For example, subject to certain assumptions, the cost of looking up the integers between 0 and n − 1, inclusive, in any binary search tree holding those values, is Ω(n log n) when those numbers are queried in bit-reversed order. This bound applies even to trees like splay trees that are allowed to rearrange their nodes between accesses. Algorithms Mainly because of the importance of fast Fourier transform algorithms, numerous efficient algorithms for applying a bit-reversal permutation to a sequence have been devised. Because the bit-reversal permutation is an involution, it may be performed easily in place (without copying the data into another array) by swapping pairs of elements. In the random-access machine commonly used in algorithm analysis, a simple algorithm that scans the indexes in input order and swaps whenever the scan encounters an index whose reversal is a larger number would perform a linear number of data moves. However, computing the reversal of each index may take a non-constant number of steps. Alternative algorithms can perform a bit reversal permutation in linear time while using only simple index calculations. Because bit-reversal permutations may be repeated multiple times as part of a calculation, it may be helpful to separate out the steps of the algorithm that calculate index data used to represent the permutation (for instance, by using the doubling and concatenation method) from the steps that use the results of this calculation to permute the data (for instance, by scanning the data indexes in order and performing a swap whenever the swapped location is greater than the current index, or by using more sophisticated vector scatter–gather operations). Another consideration that is even more important for the performance of these algorithms is the effect of the memory hierarchy on running time. Because of this effect, more sophisticated algorithms that consider the block structure of memory can be faster than this naive scan. An alternative to these techniques is special computer hardware that allows memory to be accessed both in normal and in bit-reversed order. The performance of bit-reversals on both uniprocessors and multiprocessors has received serious attention in high-performance computing, because architecture-aware algorithm development can best utilize hardware and system software resources, including caches, TLBs, and multiple cores, significantly accelerating the computation. References Permutations FFT algorithms Combinatorial algorithms
Bit-reversal permutation
Mathematics
1,336
56,487,811
https://en.wikipedia.org/wiki/Histology%20and%20Histopathology
Histology and Histopathology is a monthly peer-reviewed medical journal publishing original and review articles in the fields of histology and histopathology. It was established in 1986 and is published by the University of Murcia in Spain. The editors-in-chief are Francisco Hernández and Juan F. Madrid (University of Murcia). According to the Journal Citation Reports, the journal has a 2016 impact factor of 2.025. References External links Histopathology Pathology journals Anatomy journals Academic journals established in 1986 Quarterly journals University of Murcia English-language journals
Histology and Histopathology
Chemistry
121
66,209,912
https://en.wikipedia.org/wiki/Internet%20of%20vehicles
Internet of vehicles (IoV) is a network of vehicles equipped with sensors, software, and the technologies that mediate between these, with the aim of connecting and exchanging data over the Internet according to agreed standards. IoV evolved from Vehicular Ad Hoc Networks ("VANET", a category of mobile ad hoc network used for communication between vehicles and roadside systems), and is expected to ultimately evolve into an "Internet of autonomous vehicles". It is expected that IoV will be one of the enablers for an autonomous, connected, electric, and shared (ACES) future mobility. Road vehicles as a product category depend upon numerous technology categories, from real-time analytics to commodity sensors and embedded systems. For these to operate in concert, the IoV ecosystem depends upon modern infrastructure and architectures that distribute computational burden across multiple processing units in a network. In the consumer market, IoV technology is most typically referenced in discussions of smart cities and driverless cars. Many of these architectures depend for their functionality upon open-source software and systems; for instance, Subaru's infotainment platform is able to detect a driver's wakefulness and sound an alarm prompting them to pull over for a rest. As with other internets connecting real user/consumer experiences with networks to which those users have no access or control, concerns abound as to risks inherent in the growth of IoV, especially in the areas of privacy and security; consequently, industry and governmental moves to address these concerns have begun, including the development of international standards and methods of real-time analysis. These are receiving attention from organisations including the Linux Foundation's ELISA (Enabling Linux In Safety Applications), the connected vehicles initiative at the Institute of Electrical and Electronics Engineers (IEEE), and the Connected Car Working Group at the Cellular Telecommunications Industry Association (CTIA). See also Artificial intelligence of things Automotive security Cyber-physical system Home automation Indoor positioning system Industry 4.0 Internet of Military Things Internet of things Open Interconnect Consortium OpenWSN Smart grid Web of things References Ambient intelligence Computing and society Digital technology 21st-century inventions
Internet of vehicles
Technology
427
19,905,369
https://en.wikipedia.org/wiki/RAPTOR%20%28software%29
RAPTOR is protein threading software used for protein structure prediction. It has been replaced by RaptorX, which is much more accurate than RAPTOR. Comparison of techniques Protein threading vs. homology modeling Researchers attempting to solve a protein's structure start their study with little more than a protein sequence. Initial steps may include performing a PSI-BLAST or PatternHunter search to locate similar sequences with a known structure in the Protein Data Bank (PDB). If there are highly similar sequences with known structures, there is a high probability that this protein's structure, as well as its function, will be very similar to those known structures. If no homology is found, the researcher must perform either X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy, both of which require considerable time and resources to yield a structure. Where these techniques are too expensive, time-consuming or limited in scope, researchers can use protein threading software, such as RAPTOR, to create a highly reliable model of the protein. Protein threading is more effective than homology modeling, especially for proteins which have few homologs detectable by sequence alignment. The two methods both predict protein structure from a template. Given a protein sequence, protein threading first aligns (threads) the sequence to each template in a structure library by optimizing a scoring function that measures the fitness of a sequence-structure alignment. The selected best template is used to build the structure model. Unlike homology modeling, which selects the template purely based on homology information (sequence alignments), the scoring function used in protein threading utilizes both homology and structure information (sequence-structure alignments). If no significant homology is found for a sequence, homology modeling may not give a reliable prediction. Without homology information, protein threading can still use structure information to produce a good prediction. Failed attempts to obtain a good template with BLAST often result in users processing results through RAPTOR. Integer programming vs. dynamic programming The integer programming approach of RAPTOR produces higher quality models than other protein threading methods. Most threading software uses dynamic programming to optimize its scoring function when aligning a sequence with a template. Dynamic programming is much easier to implement than integer programming; however, if a scoring function includes a pairwise contact potential, dynamic programming cannot globally optimize such a scoring function and instead generates only a locally optimal alignment. Pairwise contacts are highly conserved in protein structure and crucial for prediction accuracy. Integer programming can globally optimize a scoring function with a pairwise contact potential and produce a globally optimal alignment. Components Threading engines NoCore, NPCore and IP are the three different threading engines implemented in RAPTOR. NoCore and NPCore are based on dynamic programming and are faster than IP. The difference between them is that in NPCore, a template is parsed into many "core" regions. A core is a structurally conserved region. IP is RAPTOR's unique integer programming-based threading engine. It produces better alignments and models than the other two threading engines. Users can always start with NoCore and NPCore; if their predictions are not good enough, IP may be a better choice.
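To make the contrast between the two optimization strategies concrete, here is a minimal, hypothetical sketch of the dynamic-programming side: a plain global alignment over a position-specific score matrix, in the spirit of Needleman–Wunsch. It is not RAPTOR's actual engine, and the score values and gap penalty are placeholders. The recurrence works only because each term depends on one (target, template) pair at a time; a pairwise contact potential couples distant pairs of positions and breaks this decomposition, which is why RAPTOR turns to integer programming for global optimality:

```python
def thread_dp(score, gap=-1.0):
    """Optimal global alignment value by dynamic programming.

    score[i][j] is the (hypothetical) fitness of placing target residue i
    at template position j. Each dp cell depends only on its three
    neighbors, so per-position scores optimize exactly -- but a contact
    potential over pairs of aligned positions cannot be folded into
    this recurrence."""
    m, n = len(score), len(score[0])
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap                # target residue left unaligned
    for j in range(1, n + 1):
        dp[0][j] = j * gap                # template position left empty
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + score[i - 1][j - 1],  # align i with j
                dp[i - 1][j] + gap,                      # gap in template
                dp[i][j - 1] + gap,                      # gap in target
            )
    return dp[m][n]

# Toy example: a 3-residue target against a 4-position template.
toy_score = [[1.0, 0.2, 0.0, 0.1],
             [0.1, 0.9, 0.3, 0.0],
             [0.0, 0.2, 0.8, 0.7]]
print(thread_dp(toy_score))  # 1.7: three matches plus one gap penalty
```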
After all three methods are run, a simple consensus may help to find the best prediction. 3D structure modeling module The default 3D structure modeling tool used in RAPTOR is OWL. Three-dimensional structure modeling involves two steps. The first step is loop modeling, which models regions in the target sequence that map to nothing in the template. After all the loops are modeled and the backbone is ready, side chains are attached to the backbone and packed. For loop modeling, a cyclic coordinate descent algorithm is used to fill the loops and avoid clashes. For side chain packing, a tree decomposition algorithm is used to pack all the side chains and avoid any clashes. OWL is automatically called in RAPTOR to generate the 3D output. If a researcher has MODELLER, they can also set up RAPTOR to call MODELLER automatically. RAPTOR can also generate ICM-Pro input files, with which users run ICM-Pro by themselves. PSI-BLAST module To make it a comprehensive tool set, PSI-BLAST is also included in RAPTOR to allow homology modeling. Users can set up all the necessary parameters themselves. There are two steps involved in running PSI-BLAST. The first step is to generate the sequence profile. For this step, the NR non-redundant database is used. The next step is to let PSI-BLAST search the target sequence against the sequences from the Protein Data Bank. Users can also specify their own database for each step. Protein structure viewer There are many different structure viewers. In RAPTOR, Jmol is used as the structure viewer for examining the generated prediction. Output After a threading/PSI-BLAST job, one can see a ranking list of all the templates. For each template, one can view the alignment, E-value and numerous other specific scores. Also, the functional information of the template and its SCOP classification are provided. One can also view the sequence's PSM matrix and secondary structure prediction. If a template has been reported by more than one method, it will be marked with the number of times it has been reported. This helps to identify the best template. Performance in CASP CASP, the Critical Assessment of Techniques for Protein Structure Prediction, is a biennial experiment sponsored by the NIH. CASP represents the Olympic Games of the protein structure prediction community and was established in 1994. RAPTOR first appeared in CAFASP3 (CASP5) in 2002 and was ranked number one in the individual server group for that year. Since then, RAPTOR has actively participated in every CASP for evaluation purposes and has been consistently ranked in the top tier. The most recent, CASP8, ran from May 2008 until August 2008. More than 80 prediction servers and more than 100 human expert groups worldwide registered for the event, where participants attempt to predict the 3D structure from a protein sequence. According to the ranking from Zhang's group, RAPTOR ranked 2nd among all the servers (meta servers and individual servers). Baker lab's ROBETTA was placed 5th in the same ranking list. Top five prediction servers in CASP8 References Xu J (2005). "Protein Fold Recognition by Predicted Alignment Accuracy". IEEE/ACM Trans. on Computational Biology and Bioinformatics. Xu J (2005). "Rapid Protein Side-Chain Packing via Tree Decomposition". RECOMB. External links RaptorX Website RAPTOR author's research CASP experiments home page Automated assessment of protein structure prediction in CASP8 CAFASP3 alignment accuracy evaluation Molecular modelling software
RAPTOR (software)
Chemistry
1,362
833,814
https://en.wikipedia.org/wiki/Gerald%20Hawkins
Gerald Stanley Hawkins (20 April 1928 – 26 May 2003) was a British-born American astronomer and author noted for his work in the field of archaeoastronomy. A professor and chair of the astronomy department at Boston University in the United States, he published in 1963 an analysis of Stonehenge in which he was the first to propose that it was an ancient astronomical observatory used to predict movements of the sun and moon, and that it was used as a computer. Archaeologists and other scholars have since demonstrated such sophisticated, complex planning and construction at other prehistoric earthwork sites, such as Cahokia in the U.S. Early life and education Gerald Hawkins was born in Great Yarmouth, England and studied physics and mathematics at the University of Nottingham. In 1952 he took a PhD in radio astronomy, studying under Sir Bernard Lovell at the University of Manchester. Career In 1957 Hawkins became professor of astronomy and chairman of the department at Boston University in the United States. He wrote widely on numerous subjects, including tektites, meteors and the steady-state universe theory. Born in England, he became an American citizen in 1965. Splendor in the Sky (1961) Hawkins' first book, Splendor in the Sky, is a detailed overview of astronomy. It includes a conventional overview of the history of the field and discusses topics of interest such as the formation and evolution of the Solar System as well as the properties of distant galaxies. However, he dismisses the expansion of the universe from the Big Bang as a false notion, despite the evidence compelling science "to adopt the expanding solution at the present epoch". His dismissal of the Big Bang was based on an alleged lack of "remnants left behind at the seat of the explosion [at the] center of the universe". The cosmic microwave background, which fills space with no preference as to a center, was discovered in 1965 and provided strong evidence for the Big Bang model. Hawkins acknowledges the age of Earth (over 4 billion years old) and the truth of Darwinism, and suggests the poetic compatibility of religious creation myths with cosmology. However, he also refers to the mythological Genesis flood narrative as a historical event that "probably dates back to 4000 B.C." and suggests that a meteorite was sent by God to destroy the biblical cities of Sodom and Gomorrah. He mentions the possible astronomical nature of Stonehenge, an idea he developed in a number of subsequent works. Stonehenge Decoded (1965) Hawkins applied the technological resources of the university to studying the astronomical alignments of ancient megalithic sites. He fed the positions of standing stones and other features at Stonehenge into an early IBM 7090 computer and used the mainframe to model sun and moon movements. In his 1965 book, Stonehenge Decoded (with John B. White), Hawkins argued that the various features at the monument were arranged in such a way as to predict a variety of astronomical events. By interpreting Stonehenge as a giant prehistoric observatory and computer, Hawkins' work transformed what had previously been seen as a primitive temple. The archaeological community was sceptical and his theories were criticized by such noted historians as Richard Atkinson, who denounced the book as being "tendentious, arrogant, slipshod, and unconvincing". Stonehenge Decoded sold widely.
It was especially popular among the members of 1960s counterculture, who found that it followed a similar "wisdom of the ancients" line explored by Alexander Thom. Hawkins' theories still inform popular opinion of Stonehenge. Although some archaeologists are cautious about accepting Hawkins' theories, many archaeoastronomers have built upon his work. Many scholars accept that the importance of astronomical alignment, and of large complexes being planned and constructed to express cosmology, has been demonstrated at other prehistoric sites, such as the Snake Mound and Cahokia in the U.S. Hawkins later examined the Nazca lines in Peru, and concluded there was not enough evidence to support an astronomical explanation for them. He also studied the temple of Amun at Karnak. He continued to study Stonehenge up until his death. In 1973, he published Beyond Stonehenge. The American Astronomical Society published an obituary for Hawkins. See also Aubrey holes References Sources 1928 births 2003 deaths People from Great Yarmouth 20th-century British astronomers Alumni of the University of Nottingham Alumni of the University of Manchester Archaeoastronomers Historians of astronomy Boston University faculty British emigrants to the United States
Gerald Hawkins
Astronomy
917
43,222,381
https://en.wikipedia.org/wiki/Huawei%204G%20eLTE
4G eLTE is Huawei's proprietary derivative of the LTE standard, the "e" standing for "enhanced". It is intended to provide wireless broadband transmission with peak speeds per site of 50 Mbit/s downlink and 20 Mbit/s uplink in 5 MHz, 10 MHz and 15 MHz channel bandwidths. See also Emergency operations center High Speed Packet Access and HSPA+ Huawei SingleRAN Huawei Symantec – Joint venture between Huawei and Symantec References External links Huawei products Information and communication technologies for development
Huawei 4G eLTE
Technology
123
68,484,446
https://en.wikipedia.org/wiki/Arden%20Warner
Arden Warner (born 1964 or 1965) is a Barbadian-American particle physicist and inventor, working at the Fermi National Accelerator Laboratory (Fermilab), notable for the creation of a novel, environmentally friendly, magnetism-based method for cleaning up oil spills, now being developed by Fermilab and a company led by Warner. Early life and education Arden Ayube Warner was born in Barbados, in Eagle Hall, a part of the capital, Bridgetown, in St Michael parish. He had a number of brothers and sisters. He attended school at Wesley Hall Boys’ School, King Street, and Combermere School, Waterford, Bridgetown. He was brought up by his mother, a shopkeeper, and after he completed school, they moved to the USA, where he secured a scholarship to City College of New York, within City University of New York, and later graduated with a degree in engineering and physics from Columbia University. Having worked with some of the US's national laboratories, Warner interned at Brookhaven National Laboratory and the Stanford Linear Accelerator Center (SLAC), and won a fellowship to support his PhD studies. Career Particle physics Warner secured employment in the Accelerator Division of the Fermi National Accelerator Laboratory (often known as Fermilab), Batavia, just outside Chicago, Illinois, where he had been working 26 years by early 2021. He currently works on the Proton Improvement Plan II, one of Fermilab's major projects. Warner also works as a mentor for interns, and has, at times since the early 1980s, been a member of the steering body of Fermilab's inclusive Summer Internships in Science and Technology (SIST) program, the longest-running of the Department of Energy National Laboratory system's internship programs. As of 2021, he chairs this program. Oil spill cleanup technology The US Department of Energy sought input from the scientists of the national laboratories as it struggled to deal with the cleanup required after the Deepwater Horizon event. With his wife's encouragement, Warner devised and tested an approach involving magnetite and booms which, with Fermilab support, he patented. The technique has been described as especially environmentally friendly, as magnetite is naturally occurring, and can be largely captured and reused. It has been mentioned in a range of magazines, including Scientific American and Popular Science. Warner was a speaker at a TEDx conference in Naperville, near Chicago, in 2015. As of 2019, the technology, which was elaborated as an "electromagnetic boom and MOP", was being scaled up for commercial use by a company licensed by Fermilab and led by Warner. Warner's idea was an inspiration for Irish student scientist Fionn Ferreira in his invention of a method for cleaning microplastics from the oceans. Selected publications Lead author for The design and implementation of the machine protection system for the Fermilab electron cooling facility, Warner et al., Fermilab, May 2009 Lead author for Machine Protection System Research and Development for the Fermilab PIP-II Proton Linac, Warner et al., Fermilab, October 2017 Recognition Warner was one of 50 recipients of the Barbados Jubilee Honour in 2016. Personal life Warner is married. He is a member of the board of a Barbadian youth development NGO, The Millennium Fund.
References People from Bridgetown People educated at Combermere School City College of New York alumni Columbia University alumni 20th-century American physicists 21st-century American physicists Particle physicists People associated with Fermilab Oil spill remediation technologies 21st-century American inventors Living people 1960s births Barbadian emigrants to the United States
Arden Warner
Physics
740
24,920,595
https://en.wikipedia.org/wiki/Rawnsley%27s%20bowerbird
Rawnsley's bowerbird, also known as Rawnsley's satin bird or the blue regent, is a rare intergeneric hybrid between a satin bowerbird (Ptilonorhynchus violaceus) and a regent bowerbird (Sericulus chrysocephalus). Type specimen It is based on a unique specimen collected by Henry Charles Rawnsley at Witton, near Brisbane in Queensland, Australia, on 14 July 1867. It was described and illustrated (as Ptilonorhynchus rawnsleyi) in the same year by Silvester Diggles in Part 15 of his three-volume work The Ornithology of Australia. It has at various times been considered to be a valid bowerbird species, an aberrant individual of the satin bowerbird, or an adult hybrid individual resulting from the natural crossing of a regent bowerbird with a satin bowerbird. The specimen was lost prior to 1950. Photographs A second example was not recorded until sightings and photographic evidence of another bird were obtained in November 2003 and January 2004 at Beechmont, South East Queensland, adjacent to the Lamington National Park. A further example, a mature male, was photographed in Kalang, New South Wales, in 2014, and was identified by reference to its description on Wikipedia. Description The specimen was described as being in adult male plumage, mainly the glossy blue-black colouring of the adult male satin bowerbird, but with a conspicuous and extensive yellow wing patch, yellow tipping to some tail feathers, with a paler iris colour than the satin bowerbird, and intermediate in size between the two putative parent species. References Notes Sources Birds of Queensland Ptilonorhynchidae Bird hybrids Taxa with lost type specimens Birds described in 1867 Intergeneric hybrids
Rawnsley's bowerbird
Biology
373
2,973,816
https://en.wikipedia.org/wiki/Stickum
Stickum is a trademark adhesive of Mueller Sports Medicine, of Prairie du Sac, Wisconsin, United States. It is available in powder, paste, and aerosol spray forms. According to the company website, the spray form helps improve grip "even in wet conditions". Suggested uses include bat handles and vaulting poles, with many vendors also promoting the product for use by weightlifters, and for various other athletic applications. Stickum, along with other adhesive or "sticky" substances (such as glue, rosin/tree sap, or food substances), was used for years in the National Football League to assist players in gripping the ball. The use of adhesives such as Stickum was banned by the league in 1981, and the resulting rule became known as the "Lester Hayes rule", named after Oakland Raiders defensive back Lester Hayes, known for his frequent use of Stickum. Despite the ban, Hall of Famer Jerry Rice freely admitted to illegally using Stickum throughout his career, leading many fans to question the integrity of his receiving records. Rice's claim that "all players" in his era used Stickum was quickly denied by Hall of Fame contemporaries Cris Carter and Michael Irvin. In the National Basketball Association, Houston Rockets center Dwight Howard was caught using Stickum in a game against the Atlanta Hawks in 2016. References External links Page on the Mueller Sports Medicine website Cheating in sports Sporting goods brands
Stickum
Physics
291
4,820,337
https://en.wikipedia.org/wiki/Computer%20repair%20technician
A computer repair technician is a person who repairs and maintains computers and servers. The technician's responsibilities may extend to include building or configuring new hardware, installing and updating software packages, and creating and maintaining computer networks. Overview Computer technicians work in a variety of settings, encompassing both the public and private sectors. Because of the relatively brief existence of the profession, institutions offer certificate and degree programs designed to prepare new technicians, but computer repairs are frequently performed by experienced and certified technicians who have little formal training in the field. Private sector computer repair technicians can work in corporate information technology departments, central service centers or in retail computer sales environments. Public sector computer repair technicians might work in the military, national security or law enforcement communities, health or public safety field, or an educational institution. Despite the vast variety of work environments, all computer repair technicians perform similar physical and investigative processes, including technical support and often customer service. Experienced computer repair technicians might specialize in fields such as data recovery, system administration, networking or information systems. Some computer repair technicians are self-employed or own a firm that provides services in a regional area. Some are subcontracted as freelancers or consultants. This type of computer repair technician ranges from hobbyists and enthusiasts to those who work professionally in the field. Computer malfunctions can range from a minor setting that is incorrect, to spyware, viruses, and as far as replacing hardware and an entire operating system. Some technicians provide on-site services, usually at an hourly rate. Others can provide services off-site, where the client can drop their computers and other devices off at the repair shop. Some have pickup and drop-off services for convenience. Some technicians may also take back old equipment for recycling. This is required in the EU, under WEEE rules. Hardware repair While computer hardware configurations vary widely, a computer repair technician that works on OEM equipment will work with five general categories of hardware: desktop computers, laptops, servers, computer clusters and smartphones / mobile computing devices. Technicians also work with and occasionally repair a range of peripherals, including input devices (like keyboards, mice, webcams and scanners), output devices (like displays, printers, and speakers), and data storage devices such as internal and external hard drives and disk arrays. Technicians involved in system administration might also work with networking hardware, including routers, switches, cabling, fiber optics, and wireless networks. Software repair When possible, computer repair technicians protect the computer user's data and settings. Following a repair, an ideal scenario will give the user access to the same data and settings that were available to them prior to repair. To address a software problem, the technician could take an action as minor as adjusting a single setting, or they may employ more involved techniques such as installing, uninstalling, or reinstalling various software packages. Advanced software repairs often involve directly editing keys and values in the Windows Registry or running commands directly from the command prompt.
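As a small illustration of the registry-inspection point above, the following sketch uses Python's standard winreg module (available on Windows only) to read a value before anything is changed; the key and value names are common, safe examples rather than a prescribed repair:

```python
import winreg  # Windows-only module from the Python standard library

# Read a registry value first -- a typical diagnostic step before any edit.
# The key/value below (the Windows product name) is just a readable example.
path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    product_name, value_type = winreg.QueryValueEx(key, "ProductName")
    print(product_name)
```

Writing values follows the same pattern with winreg.SetValueEx, which is why technicians generally record the original value before editing.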
A reliable, but somewhat more complicated procedure for addressing software issues is known as a system restore (also referred to as imaging or reimaging), in which the computer's original installation image (including operating system and original applications) is reapplied to a formatted hard drive. Anything unique, such as settings or personal files, will be destroyed and will therefore only be available if backed up onto external media, as this reverts everything back to its original unused state. The computer technician can only reimage if there is an image of the hard drive for that computer, either in a separate partition or stored elsewhere. On a Microsoft Windows system, if there is a restore point that was saved (normally saved on the hard drive of the computer), then the installed applications and Windows Registry can be restored to that point. This procedure may solve problems that have arisen after the time the restore point was created. If no image or system restore point is available, a fresh copy of the operating system is recommended. Formatting and reinstalling the operating system will require the license information from the initial purchase. If none is available, the operating system may require a new licence to be used. Data recovery One of the most common tasks performed by computer repair technicians, after software updates and screen repairs, is data recovery. This is the process of recovering lost data from a corrupted or otherwise inaccessible hard drive. In most cases third-party data recovery software is used to retrieve the data and transfer it to a new hard drive. Specialists say that in about 15% of cases the data cannot be recovered because the hard disk is damaged to the point where it will no longer function. Backblaze's annual report indicates that the hard drive failure rate for the first quarter of 2020 was 1.07%. Education Education requirements vary by company and individual proprietor. The entry-level requirement is generally based on the extent of the work expected. Often a 4-year degree will be required for a more specialized technician, whereas a general support technician may only require a 2-year degree or some post-secondary classes. Certification Common Certifications The most common certifications for computer repair technicians are the CompTIA A+ Certification and Network+ Certifications. Additional Certifications Additional certifications are useful when technicians are expanding their skill set. These will be useful when seeking advanced, higher-paying positions. These are generally offered by specific software or hardware providers and will give the technician an in-depth knowledge of the systems related to that software or hardware. For instance, the Microsoft Technology Associate and Microsoft Certified Solutions Associate certifications give the technician proof that they have mastered PC fundamentals. Additional Computer Technician Certifications Microsoft (MCSE, MCITP, MCTS) Apple (ACSP, ACTC) International Information Systems Security Certification Consortium (CISSP) Information Systems Audit and Control Association (ISACA) Project Management Professional (PMP) Additional Network Technician Certifications Cisco CCNA and CCNP Cisco CCIE Enterprise Infrastructure and CCIE Enterprise Wireless SolarWinds Certified Professional Wireshark WCNA License In Texas, computer companies and professionals are required to have private investigators' licenses if they access computer data for purposes other than diagnosis or repair.
Texas Occupations Code, Chapter 1702 section 104, subsection 4(b). See also Information systems technician Rework (electronics) 3-Pronged Parts Retriever References Computer occupations People in information technology Technicians
Computer repair technician
Technology
1,293
7,056,912
https://en.wikipedia.org/wiki/Clootie%20well
A clootie well is a holy well (or sacred spring), almost always with a tree growing beside it, where small strips of cloth or ribbons are left as part of a healing ritual, usually by tying them to branches of the tree (called a clootie tree or rag tree). Clootie wells are places of pilgrimage usually found in Celtic areas. It is believed the tradition comes from the ancient custom of leaving votive offerings in water. In Scots, a clootie or cloot is a strip of cloth or rag. Practices When used at the clootie wells in Scotland, Ireland, and the Isle of Man, the pieces of cloth are generally dipped in the water of the holy well and then tied to a branch while a prayer of supplication is said to the spirit of the well – in modern times usually a saint, but in pre-Christian times a goddess or local nature spirit. This is most often done by those seeking healing, though some may do it simply to honour the spirit of the well. In either case, many see this as a probable continuation of the ancient Celtic practice of leaving votive offerings in wells or pits. There are local variations to the practice. At some wells the tradition is to wash the affected part of the body with the wet rag and then tie the washing-rag on the branch; as the rag disintegrates over time, the ailment is supposed to fade away as well. At some wells the clooties are definitely "rags" and discards, at others, brightly coloured strips of fine cloth. In some locations the ceremony may also include circumambulation (or circling) of the well a set number of times and making an offering of a coin, pin or stone. Additional votive offerings hung on the branches or deposited in the wells may include rosaries, religious medals, crosses, religious icons and other symbols of faith. At clootie wells where the operative principle is to shed the ailment, and the clootie is thought to represent the ailment, the "offerings" may be grotesque castoffs. Those that instead view the clootie as an offering to the spirit, saint or deity are more likely to tie an attractive, clean piece of cloth or ribbon. The sacred trees at clootie wells are usually hawthorn trees, though ash trees are also common. The most popular times for pilgrimages to clootie wells, like other holy wells, are on the feast days of saints, the Pattern or Patron day, or on the old Gaelic festival days of Imbolc (1 February), Beltane (1 May), Lughnasadh (1 August), or Samhain (1 November). Locations In Scotland, by the village of Munlochy on the A832, is a clootie well at an ancient spring dedicated to Saint Curetán, where rags are still hung on the surrounding bushes and trees. Here the well was once thought to have had the power to cure sick children who were left there overnight. The site sometimes needs to be cleared of non-biodegradable materials and rubbish such as electrical items and a venetian blind. In the heart of Culloden woods near the battlefield is a walled clootie well, also known as St Mary's well. This well was traditionally visited on the first Sunday in May. Until recently, it was a popular holiday, with an ice-cream van situated in the car park. However, this tradition is now in decline although still marked. Craigie Well at Avoch on the Black Isle has both offerings of coins and clooties. Rags, wool and human hair were also used as charms against sorcery, and as tokens of penance or fulfilment of a vow. A clootie well once existed at Kilallan near Kilmacolm in Renfrewshire.
This holy well was dedicated to St Fillan and cloth was tied to overhanging shrub branches. In Cornwall, at Madron Well, the practice is to tie the cloth, and as it rots the ailment is believed to disappear. In 1894 Madron Well was said to be the only Cornish well where rags were traditionally tied. Rags have only appeared at other Cornish wells such as Alsia Well and Sancreed Well in about the last 30 years. Christ's Well at Mentieth was described in 1618 "as all tapestried about with old rags". In Ireland at Loughcrew, Oldcastle, County Meath, there is a wishing tree, where visitors to the passage tombs tie ribbons to the branch of a hawthorn tree. Loughcrew is a site of considerable historical importance in Ireland. It is the site of megalithic burial grounds dating back to approximately 3500 and 3300 BC, situated near the summit of Sliabh na Caillí and on surrounding hills and valleys. Popular culture In 2002, the folklorist Marion Bowman observed that the number of clootie wells had "increased markedly" both at existing and new locations in recent years. She added that those engaged in the practice often conceived of it as an ancient "Celtic" activity which they were perpetuating. A fictional clootie well at Auchterarder features in the 2006 novel The Naming of the Dead by Ian Rankin, who visited the clootie well at Munlochy on the Black Isle before writing the book. The 2018 film The Party's Just Beginning, written and directed by Inverness-born filmmaker Karen Gillan, features the Munlochy clootie well. See also Culloden, Scotland Sacred grove Well dressing Wilweorthunga Wish tree Nuragic holy well References Bibliography External links The Clootie Well, Munlochy Pictures of the Clootie Well, Munlochy Ireland – Rag Trees Irish Holy Wells – some with rags and ribbons A mention of the Clootie Well of St Curidan (Scotland) Doon Well, a renowned Holy well in Co. Donegal Irish Landmarks: The Holy Wells of Ireland The Megalithic Portal, including holy wells and sacred springs Video footage of Saint Queran's Clootie Well. Archaeological artefact types Celtic mythology Pilgrimage sites Rituals Springs (hydrology) Traditional medicine Votive offering Christian holy places
Clootie well
Environmental_science
1,299
8,860,450
https://en.wikipedia.org/wiki/Laurel%20water
Laurel water is distilled from the fresh leaves of the cherry laurel, and contains the poison prussic acid (hydrocyanic acid), along with other products carried over in the process. Pharmacological usage Historically, the water (Latin aqua laurocerasi) was used for asthma, coughs, indigestion and dyspepsia, and as a sedative narcotic; however, since it is effectively a solution of hydrogen cyanide, of uncertain strength, it would be extremely dangerous to attempt medication with laurel water. The Roman emperor Nero used cherry laurel water to poison the wells of his enemies. References Poisons Prunus
Laurel water
Environmental_science
141
20,723,155
https://en.wikipedia.org/wiki/Kaede%20%28protein%29
Kaede is a photoactivatable fluorescent protein naturally derived from a stony coral, Trachyphyllia geoffroyi. Its name means "maple" in Japanese. Upon irradiation with ultraviolet light (350–400 nm), Kaede undergoes irreversible photoconversion from green fluorescence to red fluorescence. Kaede is a homotetrameric protein with a size of 116 kDa. The tetrameric structure was deduced because a single subunit is only 28 kDa. This tetramerization possibly gives Kaede a low tendency to form aggregates when fused to other proteins. Discovery The photoconvertible fluorescence of the Kaede protein was serendipitously discovered and first reported by Ando et al. in the Proceedings of the National Academy of Sciences. An aliquot of Kaede protein was discovered to emit red fluorescence after being left on the bench and exposed to sunlight. Subsequent verification revealed that Kaede, which is originally green fluorescent, is photoconverted after exposure to UV light, becoming red fluorescent. It was then named Kaede. Properties The photoconversion property of Kaede is conferred by the tripeptide His-Tyr-Gly, which forms a green chromophore that can be converted to red. Once Kaede is synthesized, a chromophore, 4-(p-hydroxybenzylidene)-5-imidazolinone, derived from the tripeptide mediates green fluorescence in Kaede. When exposed to UV, the Kaede protein undergoes an unconventional cleavage between the amide nitrogen and the α-carbon (Cα) at His62 via a formal β-elimination reaction. Following the formation of a double bond between the Cα and Cβ of His62, the π-conjugation is extended to the imidazole ring of His62. A new chromophore, 2-[(1E)-2-(5-imidazolyl)ethenyl]-4-(p-hydroxybenzylidene)-5-imidazolinone, is formed, with a red-emitting property. The cleavage of the tripeptide was analysed by SDS-PAGE analysis. Unconverted green Kaede shows one band at 28 kDa, whereas two bands at 18 kDa and 10 kDa are observed for converted red Kaede, indicating that the cleavage is crucial for the photoconversion. A shift of the absorption and emission spectrum in Kaede is caused by the cleavage of the tripeptide. Before the photoconversion, Kaede displays a major absorption wavelength maximum at 508 nm, accompanied by a slight shoulder at 475 nm. When it is excited at 480 nm, green fluorescence is emitted with a peak at 518 nm. When Kaede is irradiated with UV or violet light, the major absorption peak shifts to 572 nm. When excited at 540 nm, Kaede shows an emission maximum at 582 nm with a shoulder at 627 nm and the 518-nm peak. Red fluorescence is emitted after this photoconversion. The photoconversion in Kaede is irreversible. Keeping the protein in the dark or illuminating it at 570 nm cannot restore its original green fluorescence. A reduced fluorescence is observed in red, photoconverted Kaede when it is intensively exposed to 405 nm light, followed by partial recovery after several minutes. Applications Like all other fluorescent proteins, Kaede can serve as a regional optical marker for gene expression and protein labeling in the study of cell behaviors. One of the most useful applications is the visualization of neurons. Delineation of an individual neuron is difficult due to the long and thin processes which entangle with those of other neurons. Even when cultured neurons are labeled with fluorescent proteins, they are still difficult to identify individually because of the dense packing.
In the past, such visualization could be done conventionally by filling neurons with Lucifer yellow or sulforhodamine, a laborious technique. After the discovery of the Kaede protein, it was found to be useful in delineating individual neurons. The neurons are transfected with Kaede protein cDNA, and are UV irradiated. The red, photoconverted Kaede protein diffuses freely in the cell except for the nucleus, and spreads over the entire cell, including dendrites and axon. This technique helps disentangle the complex networks established in a dense culture. Besides, by labeling neurons with different colors by UV irradiation of different durations, contact sites between the red and green neurons of interest can be visualized. The ability to visualize individual cells is also a powerful tool to identify the precise morphology and migratory behaviors of individual cells within living cortical slices. With the Kaede protein, a particular pair of daughter cells among neighboring Kaede-positive cells in the ventricular zone of mouse brain slices can be followed. The cell-cell borders of daughter cells are visualized, and the position and distance between two or more cells can be described. As the change in the fluorescent colour is induced by UV light, marking of cells and subcellular structures is efficient even when only a partial photoconversion is induced. Advantages as an optical marker Due to the special property of photo-switchable fluorescence, the Kaede protein possesses several advantages as an optical cell marker. After the photoconversion, the photoconverted Kaede protein emits bright and stable red fluorescence. This fluorescence can last for months without anaerobic conditions. As this red state of Kaede is bright and stable compared to the green state, and because the unconverted green Kaede emits a very low intensity of red fluorescence, the red signals provide contrast. Besides, before the photoconversion, Kaede emits bright green fluorescence which enables the visualization of the localization of the non-photoactivated protein. This is superior to other fluorescent proteins such as PA-GFP and KFP1, which only show low fluorescence before photoactivation. In addition, as both green and red fluorescence of Kaede are excited by blue light at 480 nm for observation, this light will not induce photoconversion. Therefore, illumination lights for observation and photoconversion can be separated completely. Limitations In spite of the usefulness of Kaede in cell tracking and cell visualization, there are some limitations. Although Kaede will shift to red upon exposure to UV or violet light and displays a 2,000-fold increase in red-to-green fluorescence ratio, using both the red and green fluorescence bands can cause problems in multilabel experiments. The tetramerization of Kaede may disturb the localization and trafficking of fusion proteins. This limits the usefulness of Kaede as a fusion protein tag. Ecological significance The photoconversion property of Kaede does not only contribute to applications in protein labeling and cell tracking; it is also responsible for the vast variation in the colour of the stony coral, Trachyphyllia geoffroyi. Under sunlight, due to the photoconversion of Kaede, the tentacles and disks will turn red. As green fluorescent Kaede is synthesized continuously, these corals appear green again as more unconverted Kaede is created.
Through different proportions of photoconverted and unconverted Kaede, a great diversity of colour is found in corals. References Bioluminescence Protein methods Fluorescent proteins
Kaede (protein)
Chemistry,Biology
1,546
73,267,330
https://en.wikipedia.org/wiki/Ville%27s%20inequality
In probability theory, Ville's inequality provides an upper bound on the probability that a non-negative supermartingale ever exceeds a certain value. The inequality is named after Jean Ville, who proved it in 1939. The inequality has applications in statistical testing. Statement Let X_0, X_1, X_2, ... be a non-negative supermartingale. Then, for any real number a > 0, P(sup_n X_n ≥ a) ≤ E[X_0] / a. The inequality is a generalization of Markov's inequality. References Probabilistic inequalities Martingale theory
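For readers who want the formal statement, the following LaTeX fragment gives the inequality together with the standard proof sketch via optional stopping; this is a textbook argument added for clarity, not material from the article's references:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $(X_n)_{n \ge 0}$ be a non-negative supermartingale. For any $a > 0$,
\[
  \mathbb{P}\Bigl(\sup_{n \ge 0} X_n \ge a\Bigr) \le \frac{\mathbb{E}[X_0]}{a}.
\]
\emph{Sketch.} Let $\tau = \inf\{n : X_n \ge a\}$. Optional stopping for
non-negative supermartingales gives
$\mathbb{E}[X_0] \ge \mathbb{E}[X_{\tau \wedge n}] \ge a\,\mathbb{P}(\tau \le n)$,
and letting $n \to \infty$ yields the bound. With the constant
supermartingale $X_n = X_0$ this reduces to Markov's inequality.
\end{document}
```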
Ville's inequality
Mathematics
94
2,408,459
https://en.wikipedia.org/wiki/Welte-Mignon
M. Welte & Sons, Freiburg and New York was a manufacturer of orchestrions, organs and reproducing pianos, established in Vöhrenbach by Michael Welte (1807–1880) in 1832. Overview From 1832 until 1932, the firm produced mechanical musical instruments of the highest quality. The firm's founder, Michael Welte (1807–1880), and his company were prominent in the technical development and construction of orchestrions from 1850 until the early 20th century. In 1872, the firm moved from the remote Black Forest town of Vöhrenbach into a newly developed business complex beneath the main railway station in Freiburg, Germany. They created an epoch-making development when they replaced the playing mechanism of their instruments, substituting perforated paper rolls for fragile wooden pinned cylinders. In 1883, Emil Welte (1841–1923), the eldest son of Michael, who had emigrated to the United States in 1865, patented the paper roll method, the model of the later piano roll. In 1889, the technique was further perfected, and again protected through patents. Later, Welte built only instruments using the new technique, which was also licensed to other companies. With branches in New York and Moscow, and representatives throughout the world, Welte became very well known. The firm was already famous for its inventions in the field of the reproduction of music when Welte introduced the Welte-Mignon reproducing piano in 1904. "It automatically replayed the tempo, phrasing, dynamics and pedalling of a particular performance, and not just the notes of the music, as was the case with other player pianos of the time." In September 1904, the Mignon was demonstrated at the Leipzig Trade Fair. In March 1905 it became better known when showcased "at the showrooms of Hugo Popper, a manufacturer of roll-operated orchestrions". By 1906, the Mignon was also exported to the United States, installed in pianos by the firms Feurich and Steinway & Sons. As a result of this invention by Edwin Welte (1876–1958) and his brother-in-law Karl Bockisch (1874–1952), one could now record and reproduce the music played by a pianist as true to life as was technologically possible at the time. Pianists who recorded for Welte-Mignon included Anna Schytte. A Steinway Welte-Mignon reproducing piano and several other player pianos and reproducing pianos can be seen and heard at the Musical Museum, Brentford, England. Welte Philharmonic Organ From 1911 on, a similar system for organs, branded "Welte Philharmonic-Organ", was produced. Thirteen well-known European organist-composers of the era, among them Alfred Hollins, Eugene Gigout and Max Reger, were photographed recording for the organ, and distinguished organists like Edwin Lemare, Clarence Eddy and Joseph Bonnet were recorded too. The largest Philharmonic Organ ever built is at the Salomons Estate of the Markerstudy Group. This instrument was built in 1914 for Sir David Lionel Salomons to play not only rolls for the organ but also those for his Welte Orchestrion No. 10 from about 1900, which he traded in for the organ. One of these organs can also be seen in the Scotty's Castle museum in Death Valley, where it is played regularly during museum tours. An organ built for HMHS Britannic never made its way to Belfast due to the outbreak of the First World War. It can currently be heard playing in the Swiss National Museum in Seewen. Welte Inc. In 1912 a new company was founded, "M. Welte & Sons Inc." in New York, and a new factory was built in Poughkeepsie, New York.
Shareholders were predominantly family members in the U.S. and Germany, among them Barney Dreyfuss, Edwin's brother-in-law. As a result of the Alien Property Custodian enactment during the First World War, the company lost its American branch and all of its U.S. patents. This caused the company great economic hardship. Later, the Great Depression and the mass production of new technologies like the radio and the electric record player in the 1920s virtually brought about the demise of the firm and its expensive instruments. Other companies with similar products, like the American Piano Company (Ampico) and Duo-Art, also began to fade from the scene at this time. From 1919 on, Welte also built theatre organs, in particular for installation in cinemas. With the introduction of "talkies" around 1927, the demand for these also began to diminish, and by 1931 production of such instruments had been severely curtailed. The last big theatre organ was a custom-built instrument for the Norddeutscher Rundfunk (NORAG) broadcasting company in Hamburg, still in place and still playing today. A number of other Welte theatre organs survive in museums. In 1932 the firm, now with Karl Bockisch as sole owner, barely escaped bankruptcy, and began to concentrate on the production of church and other speciality organs. The last project of Edwin Welte was an electronic organ equipped with photo-cells, the Phototone-Organ. This instrument was the first ever to use analogue sampled sound. In 1936, a prototype of this type of organ was demonstrated at a concert in the Berliner Philharmonie. The production of these organs, in cooperation with the Telefunken Company, was halted by the Nazi government because the inventor, Edwin Welte, was married to Betty Dreyfuss, who was Jewish. The business complex in Freiburg was bombed and completely destroyed in November 1944. This event seemed to obliterate the firm's closely kept secrets, and its recording apparatus and recording process appeared lost forever. In recent years, however, parts of the recording apparatus for the Welte Philharmonic-Organs, along with documents, were found in the United States. It was then possible to reconstruct the recording process, at least in theory. The Augustiner Museum of Freiburg keeps the legacy of the company, all that survived the Second World War. Media Ossip Gabrilowitsch plays for Welte-Mignon on July 4, 1905: Johannes Brahms, Intermezzo in C major, Op. 119, No. 3. Arthur Nikisch plays for Welte-Mignon on February 9, 1906: Johannes Brahms, Hungarian Dance No. 5. Gabriel Fauré plays his Pavane, Op. 50 (1913). See and hear a Welte-Mignon piano roll play "Mon Reve" by Hanna Vollenhoven. Welte-Mignon made several organs for important churches, as did Welte-Tripp. One of the last surviving instruments is in the Church of the Covenant, Boston, Mass. It was restored by Austin several years ago, supposedly to its original state. It had been altered by an organist in 1959 or 1960; until that time it had been carefully restored and releathered by the Reed-Treanor organ company. This included the entire combination action in the console, the manual relays in the church basement, and the repair of the massive 25 HP DC motor that powered the Spencer Turbine blower. During the two years they cared for the organ, no tonal or structural changes were made. References Notes Sources Wie von Geisterhand. Aus Seewen in die Welt. 100 Jahre Welte-Philharmonie-Orgel. Museum für Musikautomaten, Seewen (SO), Switzerland, 2011. 
Gerhard Dangel: The history of the Welte family and the house of M. Welte & Sons. In: The Pianola Journal, No. 18, London 2007, pp. 3–49. Gerhard Dangel and Hans-W. Schmitz: Welte-Mignon piano rolls: complete library of the European recordings 1904–1932 for the Welte-Mignon reproducing piano. Welte-Mignon Klavierrollen: Gesamtkatalog der europäischen Aufnahmen 1904–1932 für das Welte-Mignon Reproduktionspiano. Stuttgart 2006. Automatische Musikinstrumente aus Freiburg in die Welt - 100 Jahre Welte-Mignon: Augustinermuseum Freiburg, Exhibition from September 17, 2005 to January 8, 2006 / [Ed.: Augustinermuseum]. With contrib. by Durward Rowland Center, Gerhard Dangel, ... [Red.: Gerhard Dangel]. Freiburg: Augustinermuseum, 2005. Hermann Gottschewski: Die Interpretation als Kunstwerk: musikalische Zeitgestaltung und ihre Analyse am Beispiel von Welte-Mignon-Klavieraufnahmen aus dem Jahre 1905. Laaber: Laaber-Verlag, 1996. Charles David Smith and Richard James Howe: The Welte-Mignon: its music and musicians. Vestal, NY: Vestal Press, 1994. Quirin David Bowers: Encyclopedia of automatic musical instruments: Cylinder music boxes, disc music boxes, piano players and player pianos... Incl. a dictionary of automatic musical instrument terms. Vestal, N.Y.: The Vestal Press, 1988. Gerhard Dangel: Geschichte der Firma M. Welte & Söhne Freiburg i. B. und New York. Freiburg: Augustinermuseum 1991. Peter Hagmann: Das Welte-Mignon-Klavier, die Welte-Philharmonie-Orgel und die Anfänge der Reproduktion von Musik. Bern [u.a.]: Lang, 1984. Online-Version 2002 External links The Welte-Mignon portal for reproducing pianos Complete listing of all Welte-Mignon-Rolls A discussion of the Welte-Mignon, in English, published by the Pianola Institute, London, with many illustrations and audio examples www.pianola.org The Player Piano Group - the UK's main Player Piano society The Pianola Forum - online discussion group The International Association of Player-Piano, Roll-Playing and Automatic Instrument Enthusiasts Musical Box Society International The Welte Organ at Salomon Campus, Canterbury Christ Church University College The Restoration of Sir David Lionel Salomons Organ in Royal Tunbridge Wells German Society for self-playing instruments / Gesellschaft für selbstspielende Musikinstrumente e.V. Welte Wireless Organ Hamburg Articles Das Welte-Mignon-Klavier, die Welte-Philharmonie-Orgel und die Anfänge der Reproduktion von Musik by Peter Hagmann (1984) Defunct companies of Germany Pipe organ building companies Piano manufacturing companies of Germany Mechanical musical instruments Musical instrument manufacturing companies of Germany Companies based in Baden-Württemberg
Welte-Mignon
Physics,Technology
2,265
4,698,944
https://en.wikipedia.org/wiki/Comparison%20of%20issue-tracking%20systems
Notable issue tracking systems, including bug tracking systems, help desk and service desk issue tracking systems, as well as asset management systems, include the following. The comparison includes client-server application, distributed and hosted systems. General System names listed with a light purple background are no longer in active development. Features Input interfaces Notification interfaces Revision control system integration Authentication methods Containers See also Comparison of help desk issue tracking software List of personal information managers Comparison of project management software Networked Help Desk OSS through Java Notes References External links Issue tracking systems
Comparison of issue-tracking systems
Technology
107
245,944
https://en.wikipedia.org/wiki/Durability%20%28database%20systems%29
In database systems, durability is the ACID property that guarantees that the effects of transactions that have been committed will survive permanently, even in cases of failures, including incidents and catastrophic events. For example, if a flight booking reports that a seat has successfully been booked, then the seat will remain booked even if the system crashes. Formally, a database system ensures the durability property if it tolerates three types of failures: transaction, system, and media failures. In particular, a transaction fails if its execution is interrupted before all its operations have been processed by the system. These kinds of interruptions can originate at the transaction level from data-entry errors, operator cancellation, timeout, or application-specific errors, like withdrawing money from a bank account with insufficient funds. At the system level, a failure occurs if the contents of the volatile storage are lost, due, for instance, to system crashes, like out-of-memory events. At the media level, where media means a stable storage that withstands system failures, failures happen when the stable storage, or part of it, is lost. These cases are typically represented by disk failures. Thus, to be durable, the database system should implement strategies and operations that guarantee that the effects of transactions that have been committed before the failure will survive the event (even by reconstruction), while the changes of incomplete transactions, which have not been committed yet at the time of failure, will be reverted and will not affect the state of the database system. These behaviours are proven to be correct when the execution of transactions has respectively the resilience and recoverability properties. Mechanisms In transaction-based systems, the mechanisms that assure durability are historically associated with the concept of reliability of systems, as proposed by Jim Gray in 1981. This concept includes durability, but it also relies on aspects of the atomicity and consistency properties. Specifically, a reliability mechanism requires primitives that explicitly state the beginning, the end, and the rollback of transactions, which are also implied for the other two aforementioned properties. In this article, only the mechanisms strictly related to durability have been considered. These mechanisms are divided into three levels: transaction, system, and media level. These levels also correspond to the scenarios in which failures can happen and which have to be considered in the design of database systems to address durability. Transaction level Durability against failures that occur at the transaction level, such as canceled calls and inconsistent actions that may be blocked before committing by constraints and triggers, is guaranteed by the serializability property of the execution of transactions. The state generated by the effects of previously committed transactions is available in main memory and, thus, is resilient, while the changes carried by non-committed transactions can be undone. In fact, thanks to serializability, they can be discerned from other transactions and, therefore, their changes are discarded. In addition, it is relevant to consider that in-place changes, which overwrite old values without keeping any kind of history, are discouraged. There exist multiple approaches that keep track of the history of changes, such as timestamp-based solutions or logging and locking. 
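The rollback behavior just described can be illustrated with a minimal before-image (undo-log) sketch in Python. All names are hypothetical, and a real system would combine this with the locking or timestamp mechanisms mentioned above:

```python
# Minimal sketch of transaction rollback via before-images (undo logging).
# Illustrative only: real systems pair this with locking or timestamps.

class MiniTransaction:
    def __init__(self, store):
        self.store = store          # the "database": a plain dict
        self.undo_log = []          # (key, old_value) pairs, oldest first

    def write(self, key, value):
        # Record the before-image so the change can be undone later.
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def rollback(self):
        # Restore before-images in reverse order, discarding all changes.
        for key, old in reversed(self.undo_log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.undo_log.clear()

db = {"balance": 100}
tx = MiniTransaction(db)
tx.write("balance", 40)
tx.rollback()                       # e.g., a data-entry error: revert it
assert db["balance"] == 100
```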
System level At system level, failures happen, by definition, when the contents of the volatile storage are lost. This can occur in events like system crashes or power outages. Existing database systems use volatile storage (i.e. the main memory of the system) for different purposes: some store their whole state and data in it, even without any durability guarantee; others keep the state and the data, or part of them, in memory, but also use the non-volatile storage for data; other systems only keep the state in main memory, while keeping all the data on disk. The reason behind the choice of having volatile storage, which is subject to this type of failure, alongside non-volatile storage, is found in the performance differences of the existing technologies that are used to implement these kinds of storage. However, the situation is likely to evolve as the popularity of non-volatile memory (NVM) technologies grows. In systems that include non-volatile storage, durability can be achieved by keeping and flushing an immutable sequential log of the transactions to such non-volatile storage before acknowledging commitment. Thanks to their atomicity property, the transactions can be considered the unit of work in the recovery process that guarantees durability while exploiting the log. In particular, the logging mechanism is called the write-ahead log (WAL), and it provides durability by writing changes to the log on disk before they are synchronized from the main memory. In this way, by reconstruction from the log file, all committed transactions are resilient to system-level failures, because they can be redone. Non-committed transactions, instead, are recoverable, since their operations are logged to non-volatile storage before they effectively modify the state of the database. In this way, the partially executed operations can be undone without affecting the state of the system. After that, those transactions that were incomplete can be re-executed. Therefore, the transaction log from non-volatile storage can be reprocessed to recreate the system state right before any later system-level failure. For performance reasons, logging is done as a combination of tracking data and operations (i.e. transactions). Media level At media level, failure scenarios affect non-volatile storage, like hard disk drives, solid-state drives, and other types of storage hardware components. To guarantee durability at this level, the database system must rely on stable memory, which is a memory that is completely and ideally failure-resistant. This kind of memory can be achieved with mechanisms of replication and robust writing protocols. Many tools and technologies are available to provide a logical stable memory, such as the mirroring of disks, and their choice depends on the requirements of the specific applications. In general, replication and redundancy strategies and architectures that behave like stable memory are available at different levels of the technology stack. In this way, even in case of catastrophic events where the storage hardware is damaged, data loss can be prevented. At this level, there is a strong bond between durability and system and data recovery, in the sense that the main goal is to preserve the data, not necessarily in online replicas, but possibly as offline copies. These last techniques fall into the categories of backup, data loss prevention, and IT disaster recovery. 
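To make the write-ahead logging discipline described under System level concrete, here is a minimal Python sketch. The file name and record format are invented for the example, and real WAL implementations also log undo information and use checkpoints; the essential point is that the log record is forced to non-volatile storage before the commit is acknowledged, so recovery can redo committed transactions:

```python
import json, os

# Minimal write-ahead-log sketch (hypothetical, illustrative only): every
# commit is appended to a log file and flushed to stable storage *before*
# the commit is acknowledged, so committed transactions survive a crash.

def commit(log_path, tx_id, operations):
    record = {"tx": tx_id, "ops": operations, "status": "commit"}
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())      # force the record to non-volatile storage
    # Only now may the system report the transaction as committed.

def recover(log_path):
    """Replay the log after a crash: redo committed transactions only."""
    state = {}
    if not os.path.exists(log_path):
        return state
    with open(log_path) as log:
        for line in log:
            record = json.loads(line)
            if record["status"] == "commit":
                for key, value in record["ops"]:
                    state[key] = value   # redo the logged change
    return state

commit("wal.log", 1, [("seat-12A", "booked")])
assert recover("wal.log")["seat-12A"] == "booked"
```

Production systems such as those following the ARIES family mentioned below combine this redo pass with an undo pass for incomplete transactions.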
Therefore, in case of media failure, the durability of transactions is guaranteed by the ability to reconstruct the state of the database from the log files stored in the stable memory, in whatever way it was implemented in the database system. There exist several mechanisms to store and reconstruct the state of a database system that improve performance, both in terms of space and time, compared to managing all the log files created from the beginning of the database system. These mechanisms often include incremental dumping, differential files, and checkpoints. Distributed databases In distributed transactions, ensuring durability requires additional mechanisms to preserve a consistent state sequence across all database nodes. This means, for example, that a single node is not always entitled to decide on its own to conclude a transaction by committing it. In fact, the resources used in that transaction may be on other nodes, where other transactions are occurring concurrently. Otherwise, in case of failure, if consistency could not be guaranteed, it would be impossible to acknowledge a safe state of the database for recovery. For this reason, all participating nodes must coordinate before a commit can be acknowledged. This is usually done by a two-phase commit protocol (a schematic sketch appears below). In addition, in distributed databases, even the protocols for logging and recovery must address the issues of distributed environments, such as deadlocks, that could prevent the resilience and recoverability of transactions and, thus, durability. A widely adopted family of algorithms that ensures these properties is Algorithms for Recovery and Isolation Exploiting Semantics (ARIES). See also Atomicity Consistency Isolation Relational database management system Data breach References Further reading External links Durability aspects in Oracle's databases MySQL InnoDB documentation on support of ACID properties PostgreSQL's documentation on reliability Microsoft SQL Server Control Transaction Durability Interactive latency visualization for different types of storages from Berkeley Data engineering Transaction processing
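As referenced above, a schematic illustration of two-phase commit as a minimal Python sketch (in-process objects stand in for database nodes; a real protocol must also log each decision durably and handle timeouts and node crashes):

```python
# Schematic two-phase commit (hypothetical, in-process "nodes" only):
# phase 1 asks every participant to prepare and vote; phase 2 commits
# only if all voted yes, otherwise aborts everywhere.

def two_phase_commit(participants):
    votes = [node.prepare() for node in participants]   # phase 1: voting
    decision = all(votes)
    for node in participants:                           # phase 2: completion
        node.commit() if decision else node.abort()
    return decision

class Node:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "active"
    def prepare(self):
        self.state = "prepared" if self.can_commit else "aborting"
        return self.can_commit
    def commit(self):
        self.state = "committed"
    def abort(self):
        self.state = "aborted"

nodes = [Node(), Node(), Node(can_commit=False)]
assert two_phase_commit(nodes) is False     # one no-vote aborts everywhere
assert {n.state for n in nodes} == {"aborted"}
```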
Durability (database systems)
Engineering
1,697
1,000,609
https://en.wikipedia.org/wiki/Dallasite
Dallasite is a breccia made of subequant to rectangular or distinctly elongate, curvilinear shards that represent the spalled rims of pillow basalt (see: Hyaloclastite). This material is commonly partly altered to chlorite, epidote, quartz and carbonate, for which the local term 'dallasite' has been coined. The stone dallasite is named after Dallas Road, Victoria, British Columbia. It is considered the unofficial stone of British Columbia's capital city. Dallasite is found in Triassic volcanic rocks of Vancouver Island and is considered the third most important gem material in British Columbia. References Rocks Gemstones Geology of British Columbia Breccias
Dallasite
Physics,Materials_science
146
59,222,107
https://en.wikipedia.org/wiki/Pairing%20strategy
In a positional game, a pairing strategy is a strategy that a player can use to guarantee victory, or at least force a draw. It is based on dividing the positions on the game-board into disjoint pairs. Whenever the opponent picks a position in a pair, the player picks the other position in the same pair. Example Consider the 5-by-5 variant of Tic-tac-toe. We can create 12 pairwise-disjoint pairs of board positions, denoted by 1,...,12. Note that the central cell does not belong to any pair; it is not needed in this strategy. Each horizontal, vertical or diagonal line contains at least one pair. Therefore the following pairing strategy can be used to force a draw: "whenever your opponent chooses an element of pair i, choose the other element of pair i". At the end of the game, you have an element of each winning line. Therefore, you guarantee that the other player cannot win. Since both players can use this strategy, the game is a draw. This example is generalized below for an arbitrary Maker-Breaker game. In such a game, the goal of Maker is to occupy an entire winning-set, while the goal of Breaker is to prevent this by owning an element in each winning-set. Pairing strategy for Maker A pairing-strategy for Maker requires a set of element-pairs such that: All pairs are pairwise-disjoint; Every set that contains at least one element from each pair contains some winning-set. Whenever Breaker picks an element of a pair, Maker picks the other element of the same pair. At the end, Maker's set contains at least one element from each pair; by condition 2, he occupies an entire winning-set (this is true even when Maker plays second). As an example, consider a game-board containing all vertices in a perfect binary tree except the root. The winning-sets are all the paths from a leaf to one of the two children of the root. We can partition the elements into pairs by pairing each element with its sibling. The pairing-strategy guarantees that Maker wins even when playing second. If Maker plays first, he can win even when the game-board also contains the root: in the first step he just picks the root, and from then on plays the above pairing-strategy. Pairing strategy for Breaker A pairing-strategy for Breaker requires a set of element-pairs such that: All pairs are pairwise-disjoint; Every winning-set contains at least one pair. Whenever Maker picks an element of a pair, Breaker picks the other element of the same pair. At the end, Breaker has an element in each pair; by condition 2, he has an element in each winning-set. An example of such a pairing-strategy for 5-by-5 tic-tac-toe is described above; other examples exist for 4x4 and 6x6 tic-tac-toe. Another simple case when Breaker has a pairing-strategy is when all winning-sets are pairwise-disjoint and their size is at least 2 (a small code sketch of Breaker's pairing response appears below). References Positional games
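A pairing strategy is straightforward to mechanize. The following minimal Python sketch plays Breaker's side for an abstract pairing (the pair labels below are illustrative, not the 5-by-5 tic-tac-toe pairing): whenever Maker takes one element of a pair, Breaker answers with its partner.

```python
# Minimal sketch of a pairing strategy for Breaker. The pairing below is a
# small illustrative example with abstract position labels.

PAIRS = [("a1", "a2"), ("b1", "b2"), ("c1", "c2")]

# Map each position to its partner for O(1) lookup.
PARTNER = {}
for x, y in PAIRS:
    PARTNER[x], PARTNER[y] = y, x

def breaker_reply(maker_move, free_positions):
    """Answer Maker's move with the other element of the same pair.

    If the partner is already taken, or the move is unpaired (like the
    centre cell in 5-by-5 tic-tac-toe), any free position is harmless.
    """
    partner = PARTNER.get(maker_move)
    if partner in free_positions:
        return partner
    return next(iter(free_positions))   # arbitrary harmless move

free = {"a2", "b1", "b2", "c1", "c2"}
assert breaker_reply("a1", free) == "a2"
```

Because every winning set contains at least one pair, following this reply rule guarantees Breaker ends the game holding an element of every winning set.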
Pairing strategy
Mathematics
660
9,830,701
https://en.wikipedia.org/wiki/List%20of%20tallest%20statues
This list of tallest statues includes completed statues that are at least tall. The height values in this list are measured to the highest part of the human (or animal) figure, but exclude the height of any pedestal (plinth) or other base platform, as well as any mast, spire, or other structure that extends higher than the tallest figure in the monument. The definition of "statue" for this list is a free-standing sculpture (as opposed to a relief), representing one or more people or animals (real or mythical), in their entirety or partially (such as a bust). Heights stated are those of the statue itself and (separately) the total height of the monument, including structures the statue is standing on or holding. Monuments that contain statues are included in this list only if the statue fulfills these criteria and the height criterion. Existing By country/region Destroyed Proposed or under construction See also List of statues List of tallest bridges List of tallest buildings List of tallest structures List of the tallest statues in India List of the tallest statues in Mexico List of the tallest statues in Sri Lanka List of the tallest statues in the United States List of tallest Hindu statues List of colossal sculpture in situ List of largest monoliths New 7 Wonders of the World Notes References External links Top 10 highest monuments – Architecture Portal News Top highest monuments in the World 中國13尊大佛 (The 13 great Buddhas of China) The tallest statues in the world – Video By Top 10 Hindi Statues
List of tallest statues
Physics,Mathematics
290
40,646,378
https://en.wikipedia.org/wiki/Stonehurst%20Historic%20Preservation%20Overlay%20Zone
The Stonehurst Historic Preservation Overlay Zone is located in the Sun Valley neighborhood of Los Angeles, in the northeastern San Fernando Valley. It is a city-designated Historic Preservation Overlay Zone (HPOZ). Architecture Most of the 92 homes were built between 1923 and 1925 by Dan Montelongo, using local river stone from the Tujunga Wash. The neighborhood has the highest concentration of homes utilizing native river rock as a primary building material in Los Angeles. The bungalows are often characterized as being "Stonemason Vernacular," a derivative of the American Craftsman architectural style. The 1930 Stonehurst Park Community Building, also by Dan Montelongo, is a Los Angeles Historic-Cultural Monument in the HPOZ. See also List of Los Angeles Historic-Cultural Monuments in the San Fernando Valley History of the San Fernando Valley References External links Map of the Stonehurst Historic Preservation Overlay Zone properties Stonehurst HPOZ Preservation Plan — photographs of Stonehurst houses, and history of Stonehurst. Sun Valley, Los Angeles Los Angeles Historic Preservation Overlay Zones History of the San Fernando Valley American Craftsman architecture in California
Stonehurst Historic Preservation Overlay Zone
Engineering
223
2,546,505
https://en.wikipedia.org/wiki/Mogul%20lamp
A mogul lamp or six way lamp is a floor lamp which has a large center light bulb surrounded by three (or four) smaller bulbs that may be candelabra-style or standard medium-base bulbs, each mounted base-down. This entire assembly is typically covered, at least partially, by a large cylindrical (or bell-shaped) fabric shade which is fitted over the reflector bowl, an upturned hemispherical diffuser of white glass surrounding the center bulb. The top of the lamp is usually designed to sit just above the eye level of an average adult standing next to it, to avoid unpleasant glare from unshaded bulbs. Etymology The lamp is named after the Great Mogul. Details The bulb socket in the center has a larger diameter (an E39 or E40 mogul base) than a regular E26 or E27 Edison screw light socket, and is typically made of cast porcelain for the higher temperatures. Mogul-base lamps are available for industrial use in larger power ratings (250–1,500 watts) and in halogen, mercury vapor, high-pressure sodium and metal-halide lamp configurations. Compact fluorescent mogul-base bulbs are also available, as are adaptors that allow medium-base bulbs to be used in mogul sockets. There are usually two three-way switches near the top of the floor lamp to operate the bulbs. One controls the three-way center bulb, and the other turns on one, two, or all three (or four) of the peripheral bulbs. The center bulb may be very high power (often a three-way, 100-200-300 watt bulb), whereas the others are usually 60 watts or less. Some models have a night light in the base operated by a foot switch. One model turns the current light settings on or off by moving the lamp pole up or down. This design allows sixteen different combinations of brightness to be obtained. The result is that one lamp can provide a very soft, diffuse glow, or be quickly adjusted to illuminate an entire room, and everything in between. Popular in the 1920s and 1930s, mogul lamps can be obtained in thrift or antique stores and can still be purchased new. Mogul lamps and mathematics Mogul lamps are also the subject of a mathematics problem concerning the number of possible combinations of power that can be obtained. As it turns out, the name "Six Way Lamp" is somewhat deceiving, since there are in fact 16 possible combinations (without the night-light), including combinations with all lamps of either switch off, as the enumeration below shows. The term probably comes from the fact that the design incorporates two "three-way" switches. See also Floor lamp Light fixtures
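The count of sixteen follows directly: the three-way center bulb has four settings (off, low, medium, high) and the peripheral switch also has four (zero to three bulbs lit), giving 4 × 4 = 16. A small Python enumeration confirms it (the setting labels are illustrative):

```python
from itertools import product

# The centre three-way bulb has 4 settings and the second switch turns on
# 0, 1, 2 or all 3 peripheral bulbs: 4 x 4 = 16 combinations in total,
# including the settings in which one or both switches are fully off.

center_settings = ["off", "low", "medium", "high"]
peripheral_settings = [0, 1, 2, 3]          # number of small bulbs lit

combinations = list(product(center_settings, peripheral_settings))
print(len(combinations))                    # 16
```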
Mogul lamp
Engineering
549
4,163,498
https://en.wikipedia.org/wiki/Minimum%20Data%20Set
The Minimum Data Set (MDS) is part of the U.S. federally mandated process for clinical assessment of all residents in Medicare or Medicaid certified nursing homes, as well as in non-critical-access hospitals with Medicare swing-bed agreements. (The term "swing bed" refers to the Social Security Act's authorization for small, rural hospitals to use their beds in both an acute care and a Skilled Nursing Facility (SNF) capacity, as needed.) Description This process provides a comprehensive assessment of each resident's functional capabilities and helps nursing home and SNF staff identify health problems. Resource Utilization Groups (RUG) are part of this process, and provide the foundation upon which a resident's individual care plan is formulated. MDS assessment forms are completed for all residents in certified nursing homes, including SNFs, regardless of the source of payment for the individual resident. MDS assessments are required for residents on admission to the nursing facility and then periodically, within specific guidelines and time frames. Participants in the assessment process are health care professionals and direct care staff employed by the nursing home, such as registered nurses, licensed practical or vocational nurses (LPN/LVN), therapists, and social services, activities, and dietary staff. MDS information is transmitted electronically by nursing homes to the MDS database in their respective states. MDS information from the state databases is captured into the national MDS database at the Centers for Medicare and Medicaid Services (CMS). Sections of the MDS (Minimum Data Set): Identification Information Hearing, Speech and Vision Cognitive Patterns Mood Behavior Preferences for Customary Routine and Activities Functional Status Functional Abilities and Goals Bladder and Bowel Active Diagnoses Health Conditions Swallowing/Nutritional Status Oral/Dental Status Skin Conditions Medications Special Treatments, Procedures and Programs Restraints Participation in Assessment and Goal Setting Care Area Assessment (CAA) Summary Correction Request Assessment Administration The MDS is updated by the Centers for Medicare and Medicaid Services. Specific coding regulations for completing the MDS can be found in the Resident Assessment Instrument User's Guide. Versions of the Minimum Data Set have been used, or are being used, in other countries. See also Nursing Minimum Data Set (NMDS), US National minimum dataset, in health informatics National Minimum Data Set for Social Care (NMDS-SC), England References General CMS - MDS Quality Indicator and Resident Reports Centers for Medicare & Medicaid Services Long Term Care Facility Resident Assessment Instrument 3.0 User's Manual Version 1.16 October 2018 Health informatics Medicare and Medicaid (United States)
Minimum Data Set
Biology
517
149,306
https://en.wikipedia.org/wiki/National%20Center%20for%20Biotechnology%20Information
The National Center for Biotechnology Information (NCBI) is part of the National Library of Medicine (NLM), a branch of the National Institutes of Health (NIH). It is approved and funded by the government of the United States. The NCBI is located in Bethesda, Maryland, and was founded in 1988 through legislation sponsored by US Congressman Claude Pepper. The NCBI houses a series of databases relevant to biotechnology and biomedicine and is an important resource for bioinformatics tools and services. Major databases include GenBank for DNA sequences and PubMed, a bibliographic database for biomedical literature. Other databases include the NCBI Epigenomics database. All these databases are available online through the Entrez search engine. NCBI was directed by David Lipman, one of the original authors of the BLAST sequence alignment program and a widely respected figure in bioinformatics. GenBank NCBI has had responsibility for making the GenBank DNA sequence database available since 1992. GenBank coordinates with individual laboratories and other sequence databases, such as those of the European Molecular Biology Laboratory (EMBL) and the DNA Data Bank of Japan (DDBJ). Since 1992, NCBI has grown to provide other databases in addition to GenBank. NCBI provides the Gene database, Online Mendelian Inheritance in Man, the Molecular Modeling Database (3D protein structures), dbSNP (a database of single-nucleotide polymorphisms), the Reference Sequence Collection, a map of the human genome, and a taxonomy browser; it also coordinates with the National Cancer Institute to provide the Cancer Genome Anatomy Project. The NCBI assigns a unique identifier (taxonomy ID number) to each species of organism. The NCBI has software tools that are available through web browsers or by FTP. For example, BLAST is a sequence similarity searching program. BLAST can do sequence comparisons against the GenBank DNA database in less than 15 seconds. NCBI Bookshelf The NCBI Bookshelf is a collection of freely accessible, downloadable, online versions of selected biomedical books. The Bookshelf covers a wide range of topics including molecular biology, biochemistry, cell biology, genetics, microbiology, disease states from a molecular and cellular point of view, research methods, and virology. Some of the books are online versions of previously published books, while others, such as Coffee Break, are written and edited by NCBI staff. The Bookshelf is a complement to the Entrez PubMed repository of peer-reviewed publication abstracts, in that Bookshelf contents provide established perspectives on evolving areas of study and a context in which many disparate individual pieces of reported research can be organized. Basic Local Alignment Search Tool (BLAST) BLAST is an algorithm used for calculating sequence similarity between biological sequences, such as nucleotide sequences of DNA and amino acid sequences of proteins. BLAST is a powerful tool for finding sequences similar to the query sequence within the same organism or in different organisms. It searches the query sequence against NCBI databases and servers and posts the results back to the user's browser in the chosen format. Input sequences to BLAST are mostly in FASTA or GenBank format, while output can be delivered in a variety of formats such as HTML, XML, and plain text. HTML is the default output format for NCBI's web page. 
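Remote BLAST searches can also be scripted. A minimal sketch using Biopython's NCBIWWW interface (this assumes Biopython is installed and the NCBI service is reachable; the query sequence is a toy example, and a real search would use a full-length sequence):

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Sketch of a remote BLAST search against GenBank's "nt" database using
# Biopython. Network access to NCBI is required; queries may take a while.
query = "AGTACACTGGT"                        # toy nucleotide sequence
handle = NCBIWWW.qblast("blastn", "nt", query)

record = NCBIXML.read(handle)                # parse the XML result
for alignment in record.alignments[:3]:      # look at the first few hits
    best_hsp = alignment.hsps[0]
    print(alignment.title, best_hsp.expect)  # hit description and E-value
```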
Results from NCBI BLAST are presented in graphical format, showing all the hits found; as a table of sequence identifiers for the hits, with scoring-related data; and as alignments between the sequence of interest and each hit, with their corresponding BLAST scores. Entrez The Entrez Global Query Cross-Database Search System is used at NCBI for all the major databases, such as Nucleotide and Protein Sequences, Protein Structures, PubMed, Taxonomy, Complete Genomes, OMIM, and several others. Entrez is both an indexing and a retrieval system, holding data from various sources for biomedical research. NCBI distributed the first version of Entrez in 1991, composed of nucleotide sequences from PDB and GenBank, protein sequences from SWISS-PROT, translated GenBank, PIR, PRF, PDB, and associated abstracts and citations from PubMed. Entrez is specially designed to integrate the data from several different sources, databases, and formats into a uniform information model and retrieval system which can efficiently retrieve the relevant references, sequences, and structures. Gene Gene has been implemented at NCBI to characterize and organize the information about genes. It serves as a major node in the nexus of genomic map, expression, sequence, protein function, structure, and homology data. A unique GeneID is assigned to each gene record, and it can be followed through revision cycles. Gene records for known or predicted genes are established here and are demarcated by map positions or nucleotide sequences. Gene has several advantages over its predecessor, LocusLink, including better integration with other databases at NCBI, broader taxonomic scope, and enhanced options for query and retrieval provided by the Entrez system. Protein The Protein database maintains the text records for individual protein sequences, derived from many different resources such as the NCBI Reference Sequence (RefSeq) project, GenBank, PDB, and UniProtKB/SWISS-Prot. Protein records are available in different formats, including FASTA and XML, and are linked to other NCBI resources. Protein provides users with relevant data such as genes, DNA/RNA sequences, biological pathways, expression and variation data, and literature. It also provides predetermined sets of similar and identical proteins for each sequence, as computed by BLAST. The Structure database of NCBI contains 3D coordinate sets for experimentally determined structures in PDB that are imported by NCBI. The Conserved Domain Database (CDD) contains protein sequence profiles that characterize highly conserved domains within protein sequences. It also has records from external resources like SMART and Pfam. There is another database of proteins known as the Protein Clusters database, which contains sets of protein sequences that are clustered according to the maximum alignments between the individual sequences, as calculated by BLAST. Pubchem database The PubChem database of NCBI is a public resource for molecules and their activities against biological assays. PubChem is searchable and accessible via the Entrez information retrieval system. See also DNA Data Bank of Japan (DDBJ) European Bioinformatics Institute (EBI) References External links of the National Library of Medicine of the National Institutes of Health 1988 establishments in Maryland Bethesda, Maryland Biotechnology organizations Medical research institutes in Maryland NCBI Online databases Online taxonomy databases Scientific organizations established in 1988
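Entrez and the databases described above can also be queried programmatically through NCBI's public E-utilities endpoints. A minimal Python sketch against the esearch endpoint (the search term and result limit are arbitrary examples):

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Minimal sketch of an Entrez query through NCBI's public E-utilities
# (https://eutils.ncbi.nlm.nih.gov). Searches PubMed and prints matching IDs.
params = urlencode({
    "db": "pubmed",            # any Entrez database: nucleotide, protein, ...
    "term": "BLAST algorithm",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urlopen(url) as response:
    result = json.load(response)
print(result["esearchresult"]["idlist"])   # PubMed IDs of the top hits
```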
National Center for Biotechnology Information
Engineering,Biology
1,357
46,868,492
https://en.wikipedia.org/wiki/Verticillium%20zaregamsianum
Verticillium zaregamsianum is a fungus often found in lettuce in Japan. It can cause verticillium wilt in some plant species. It produces yellow-pigmented hyphae and microsclerotia, while producing few chlamydospores and sparse resting mycelium. It is most closely related to V. tricorpus. References Further reading Inderbitzin, Patrik, et al. "Identification and differentiation of Verticillium species and V. longisporum lineages by simplex and multiplex PCR assays." PLoS ONE 8.6 (2013): e65990. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0065990 Stajner, Natasa. "Identification and Differentiation of Verticillium Species with PCR Markers and Sequencing of ITS Region." Plant and Animal Genome XXIII Conference. Plant and Animal Genome. External links Fungal plant pathogens and diseases Fungi described in 2011 Enigmatic Hypocreales taxa Fungus species
Verticillium zaregamsianum
Biology
242
51,100,867
https://en.wikipedia.org/wiki/Algestone%20acetonide
Algestone acetonide (developmental code name W-3395), also known as algestone 16α,17α-acetonide or 16α,17α-isopropylidenedioxyprogesterone, is a progestin which was never marketed. It is the acetonide cyclic ketal of algestone. Another progestin, algestone acetophenide, in contrast, has been marketed. Chemistry References Abandoned drugs Acetonides Diketones Pregnanes Progestogens Steroid cyclic ketals
Algestone acetonide
Chemistry
119
11,708,313
https://en.wikipedia.org/wiki/Sodium%20nonanoyloxybenzenesulfonate
Sodium nonanoyloxybenzenesulfonate (NOBS) is an important component of laundry detergents and bleaches. It is known as a bleach activator for active oxygen sources, allowing formulas containing hydrogen-peroxide-releasing chemicals (specifically sodium perborate, sodium percarbonate, sodium perphosphate, sodium persulfate, and urea peroxide) to effect bleaching at lower temperatures. Synthesis NOBS is formed by the reaction of nonanoic acid (or its esters) with phenol, followed by aromatic sulfonation using SO3 to form a sulfonic acid group at the para position. Bleach activation NOBS was developed by Procter & Gamble in 1983 and was first used in American laundry detergents in 1988. NOBS is the main bleach activator used in the U.S.A. and Japan. Compared to TAED, which is the predominant bleach activator used in Europe, NOBS is efficient at much lower temperatures. At 20 °C, NOBS is 100 times more soluble than TAED in water. When attacked by the perhydroxyl anion (from hydrogen peroxide), NOBS forms peroxynonanoic acid (a peroxy acid) and releases the leaving group sodium 4-hydroxybenzenesulfonate, which is an inert by-product; the scheme below sketches this step. References Cleaning product components Benzenesulfonates Anionic surfactants Organic sodium salts Nonanoate esters
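Schematically, the perhydrolysis step referenced above can be written as follows (R stands for the C8H17 alkyl chain of the nonanoyl group, and the released phenolate is shown prior to protonation to the hydroxybenzenesulfonate by-product; this is a simplified sketch, not a full mechanism):

```latex
% Schematic perhydrolysis of NOBS (R = C8H17): the perhydroxyl anion
% displaces the sulfonated phenolate leaving group, giving the active
% bleaching species peroxynonanoic acid.
\[
\mathrm{R{-}CO{-}O{-}C_6H_4{-}SO_3^- \; + \; HOO^-}
\;\longrightarrow\;
\mathrm{R{-}CO{-}OOH \; + \; {}^-O{-}C_6H_4{-}SO_3^-}
\]
```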
Sodium nonanoyloxybenzenesulfonate
Chemistry,Technology
318
168,651
https://en.wikipedia.org/wiki/High-performance%20liquid%20chromatography
High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures can originate from food, chemicals, pharmaceuticals, and biological, environmental, and agricultural sources, and are dissolved into liquid solutions. HPLC relies on high-pressure pumps that deliver mixtures of various solvents, called the mobile phase, which flows through the system, collecting the sample mixture on the way and delivering it into a cylinder, called the column, filled with solid particles of adsorbent material, called the stationary phase. Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a specific detector, such as a UV detector. The output of the detector is a graph called a chromatogram. Chromatograms are graphical representations of the signal intensity versus time or volume, showing peaks, which represent components of the sample. Each component appears at its respective time, called its retention time, with a peak area proportional to its amount. HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance-enhancing drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes. Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, to which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as the "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between the sample components and the adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, and most often act in combination. Operation The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the operator needs a minimum understanding of how the device acquires and processes data, in order to avoid incorrect data and distorted results. HPLC is distinguished from traditional ("low-pressure") liquid chromatography because its operational pressures are significantly higher (around 50–1,400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter and 30–250 mm length. HPLC columns are also made with smaller adsorbent particles (1.5–50 μm in average particle size). 
This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique. The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser, which removes dissolved gasses, are mixed to become the mobile phase, and then flow through the sampler, which brings the sample mixture into the mobile-phase stream, which then carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, and then directly into a flow cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide additional information specific to the analyte's characteristics, such as a UV-VIS spectrum or a mass spectrum, which can provide insight into its structural features. Detectors in common use include UV/Vis, photodiode array (PDA)/diode array, and mass spectrometry detectors. A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios changing in time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed. The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which is a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column), and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte. Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle-size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature. The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components. 
The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is the reason why in gradient elution the composition of the mobile phase is varied, typically from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times: higher eluting strength speeds up the elution (resulting in shortened retention times). For example, a typical gradient profile in reversed-phase chromatography might start at 5% acetonitrile (in water or an aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes (a short program expressing such a profile is sketched at the end of this section). Periods of constant mobile-phase composition (plateaus) may also be part of a gradient profile. For example, the mobile-phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile. The chosen composition of the mobile phase depends on the intensity of interactions between the various sample components ("analytes") and the stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction, but is continuous, not step-wise. In the example using a water/acetonitrile gradient, the more hydrophobic components will elute (come off the column) later; then, once the mobile phase becomes richer in acetonitrile (i.e., a mobile phase of higher eluting strength), their elution speeds up. The choice of mobile-phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation. History and development Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient because the flow rate of solvents depended on gravity. Separations took many hours, and sometimes days, to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC); however, it was obvious that gas-phase separation and analysis of very polar, high-molecular-weight biopolymers was impossible. GC was ineffective for many life science and health applications involving biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized, which would soon result in the development of HPLC. Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in a high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile-phase velocity. These predictions underwent extensive experimentation and refinement from the 1960s through the 1970s, and continue to this day. Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle. 
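Returning to gradient elution, as referenced above: the linear gradient profile is easy to express as a short program. A minimal Python sketch (the hold and ramp times below are illustrative, not prescriptive):

```python
# Sketch of a linear-gradient profile of the kind described above: hold 5%
# acetonitrile for 2 min, then ramp linearly to 95% by 22 min.

def percent_acetonitrile(t_min, hold=2.0, ramp_end=22.0, lo=5.0, hi=95.0):
    if t_min <= hold:
        return lo                                   # initial plateau
    if t_min >= ramp_end:
        return hi                                   # final composition
    frac = (t_min - hold) / (ramp_end - hold)       # linear interpolation
    return lo + frac * (hi - lo)

for t in (0, 2, 12, 22):
    print(f"t = {t:>2} min: {percent_acetonitrile(t):5.1f}% acetonitrile")
```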
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make rudimentary designs of HPLC systems. Gas-amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at DuPont IPD (Industrial Polymers Division), such as a low-dwell-volume gradient device, as well as the replacement of the septum injector with a loop injection valve. While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous-layer particles, there has been a steady trend towards reduced particle size to improve efficiency. However, decreasing particle size raised new problems. The practical disadvantages stem from the excessive pressure drop needed to force the mobile phase through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure. Types Partition chromatography Partition chromatography was one of the first kinds of chromatography that chemists developed, and it is rarely used these days. The partition-coefficient principle has been applied in paper chromatography, thin-layer chromatography, gas-phase and liquid–liquid separation applications. The 1952 Nobel Prize in Chemistry was awarded to Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which they used for their separation of amino acids. Partition chromatography uses a retained solvent on the surface or within the grains or fibers of an "inert" solid supporting matrix, as with paper chromatography; or it takes advantage of some coulombic and/or hydrogen-donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile, with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic and neutral solutes in a single chromatographic run. The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase), the longer the elution time. The interaction strength depends on the functional groups in the analyte's molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times. Normal-phase chromatography Normal-phase chromatography was one of the first kinds of HPLC that chemists developed, but its use has decreased over the last decades. 
Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on the analyte's ability to engage in polar interactions (such as hydrogen-bonding or dipole–dipole interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with, and is retained by, the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers. The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents, such as traces of water in the mobile phase, tend to adsorb to the solid surface of the stationary phase, forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal-phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still used to some extent for structural isomer separations in both column and thin-layer chromatography formats, on activated (dried) silica or alumina supports. Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC, because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any change in the composition of the mobile phase (e.g., moisture level), causing drifting retention times. Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique. Displacement chromatography The use of displacement chromatography is rather limited; it is mostly used for preparative chromatography. The basic principle is that a molecule with a high affinity for the chromatography matrix (the displacer) is used to compete effectively for binding sites, and thus to displace all molecules with lesser affinities. There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. 
Thus, two drawbacks to elution-mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration. Reversed-phase liquid chromatography (RP-LC) Reversed-phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed-phase methods, substances are retained in the system the more hydrophobic they are. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous granules of silica gel in various shapes, mainly spherical, with different diameters (1.5, 2, 3, 5, 7, 10 μm) and varying pore diameters (60, 100, 150, 300 Å), onto whose surface various hydrocarbon ligands such as C3, C4, C8 and C18 are chemically bonded. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, as well as hybrid silica, polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most of the current methods of separation of biomedical materials use C18-type columns, sometimes called by trade names such as ODS (octadecylsilane) or RP-18 (Reversed Phase 18). The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight-chain alkyl group such as C18H37 or C8H17. With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used among biologists and life-science users that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release. RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high symmetry in the dipolar water structure and play the most important role in all processes in life science. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the tendency of water towards "cavity reduction" around the analyte and the C18 chain versus the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3 × 10⁻⁶ J/cm², methanol: 2.2 × 10⁻⁶ J/cm²) and to the hydrophobic surface of the analyte and the ligand, respectively. 
The retention can be decreased by adding a less polar solvent (methanol, acetonitrile) to the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis. Structural properties of the analyte molecule can play an important role in its retention characteristics. In theory, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S-S and others) can be retained longer, as it interacts only weakly with the water structure. On the other hand, analytes with a higher polar surface area (resulting from the presence of polar groups, such as -OH, -NH2, COO− or -NH3+ in their structure) are less retained, as they are better integrated into water. The interactions with the stationary phase can also be affected by steric effects, or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention. Retention time increases with the hydrophobic (non-polar) surface area of the molecule. For example, branched-chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly, organic compounds with single C–C bonds frequently elute later than those with a C=C bond or even a triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond. Another important factor is the mobile-phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte-ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents. Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, using mostly UV-based detectors. It is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving retention of analytes such as carboxylic acids in applications utilizing other detectors such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application but generally improve chromatographic resolution when dealing with ionizable components. Reversed-phase columns are quite difficult to damage compared with normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands; however, most reversed-phase columns consist of alkyl-derivatized silica particles, and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. 
Most types of RP columns should not be used with aqueous bases, as these will hydrolyze the underlying silica particle and dissolve it. There are selected brands of hybrid or enhanced silica-based RP columns which can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as they might hydrolyze the bonded phase as well as corrode the inside walls of the metallic parts of the HPLC equipment. As a rule, RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, a test for the metal content of a column is to inject a sample which is a mixture of 2,2'- and 4,4'-bipyridine. Because 2,2'-bipy can chelate metal, the shape of the peak for 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica. Size-exclusion chromatography Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules based on differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres packed inside the column, and is dependent on the relative size of the analyte molecules and the pore size of the packing. The process also relies on the absence of any interactions with the packing-material surface. Two types of SEC are usually distinguished: Gel permeation chromatography (GPC)—separation of synthetic polymers (aqueous or organic soluble). GPC is a powerful technique for polymer characterization using primarily organic solvents. Gel filtration chromatography (GFC)—separation of water-soluble biopolymers. GFC uses primarily aqueous solvents (typically for water-soluble biopolymers, such as proteins). The separation principle in SEC is based on the full or partial penetration of the sample molecules into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the size of the molecule, the more it is able to penetrate the pore space, and its movement through the column takes longer. On the other hand, the bigger the molecular size, the higher the probability that the molecule will not fully penetrate the pores of the stationary phase, and may even travel around them entirely; thus, it is eluted earlier. The molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all, and elute together as the first peak in the chromatogram; this is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary-phase particles and are eluted last, marking the end of the chromatogram, and may appear as a total penetration marker. In the biomedical sciences SEC is generally considered a low-resolution chromatography technique and thus is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary and quaternary structure of purified proteins. 
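The size-based partitioning described above is commonly summarized, in standard SEC theory rather than anything specific to this article, by the elution-volume relation

V_e = V_0 + K_d V_i, \qquad 0 \le K_d \le 1,

where $V_0$ is the interstitial (exclusion) volume, $V_i$ is the pore volume, and $K_d$ is the fraction of the pore volume accessible to the analyte: $K_d = 0$ reproduces the total-exclusion limit, and $K_d = 1$ corresponds to the total-penetration marker.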
SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by retarding the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules; that is, the smaller the molecule, the longer the retention time. This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by the European Pharmacopoeia) for the molecular weight comparison of different commercially available low-molecular-weight heparins. Ion-exchange chromatography Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental and industrial origin (e.g., the metal industry and industrial wastewater), as well as in biological systems, pharmaceutical samples, food, etc. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions charged oppositely to the charged sites of the column are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile-phase composition, such as increasing its salt concentration, changing its pH, or raising the column temperature. Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linking, which increases the stability of the chains. Higher cross-linking reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation. In general, ion exchangers favor the binding of ions of higher charge and smaller radius. An increase in counter-ion (with respect to the functional groups in the resin) concentration reduces the retention time, as it creates strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange, while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation-exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations. This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others. Bioaffinity chromatography High-performance affinity chromatography (HPAC) works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column with little or no retention. The target molecule is then eluted from the column using a suitable elution buffer. 
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by the simultaneous and concerted action of several of these forces at the complementary binding sites. Aqueous normal-phase chromatography Aqueous normal-phase chromatography (ANP) is also called hydrophilic interaction liquid chromatography (HILIC). This is a chromatographic technique which encompasses the mobile-phase region between reversed-phase chromatography (RP) and organic normal-phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal-phase elution order while using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous solvents in the mobile phase. Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made hydrophilic interaction LC (HILIC) an attractive alternative and a useful approach for the analysis of polar molecules. Additionally, because HILIC is routinely used with mixtures of water and polar organic solvents such as acetonitrile (ACN) and methanol, it can be easily coupled to MS. Isocratic and gradient elution A separation in which the mobile-phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horvath, who was one of the pioneers of HPLC. The mobile-phase composition does not have to remain constant. A separation in which the mobile-phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol. In isocratic elution, peak width increases linearly with retention time according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. With a weaker mobile phase, the runtime is lengthened and slowly eluting peaks become broad, leading to reduced sensitivity. A stronger mobile phase would improve the runtime and the broadening of later peaks, but it results in diminished peak separation, especially for quickly eluting analytes, which may have insufficient time to fully resolve. This issue is addressed through the changing mobile-phase composition of gradient elution. By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. 
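To illustrate why gradients sharpen late peaks, the sketch below combines the linear gradient of the example above (10% to 90% methanol in 20 minutes) with the standard linear-solvent-strength approximation log10 k = log10 kw − S·φ, where φ is the organic volume fraction. The constants log_kw and S are hypothetical, solute-specific values, not taken from the text.

def percent_b(t_min, t_gradient=20.0, b_start=10.0, b_end=90.0):
    """Organic fraction %B of a linear gradient at time t (minutes)."""
    if t_min <= 0.0:
        return b_start
    if t_min >= t_gradient:
        return b_end
    return b_start + (b_end - b_start) * t_min / t_gradient

def retention_factor(phi, log_kw=3.0, s=4.0):
    """Linear-solvent-strength estimate of k at organic fraction phi (0..1)."""
    return 10.0 ** (log_kw - s * phi)

# For this hypothetical solute, k falls from ~400 at the start of the
# gradient to ~0.25 at its end, so a strongly retained band is eventually
# pushed off the column in a narrow, concentrated zone.
for t in (0, 5, 10, 15, 20):
    print(t, round(retention_factor(percent_b(t) / 100.0), 2))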
Gradient elution also improves the peak shape of tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time. In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if the method is not scaled down or up according to the change. The driving force in reversed-phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component. Parameters Theoretical The theory of high-performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. This theory has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia: a set of quantitative criteria which test the suitability of the HPLC system for the required analysis at any step of it. Retention is represented as a normalized, unit-less factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio (see the formulas collected at the end of this section). tR is the retention time of the specific component and t0 is the time it takes for a non-retained substance to elute through the system without any retention; it is therefore called the void time. The ratio between the retention factors, k', of every two adjacent peaks in the chromatogram is used in the evaluation of the degree of separation between them, and is called the selectivity factor, α. The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile-phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as a measure of system efficiency. The definition of peak capacity in chromatography is the number of peaks that can be separated within a retention window for a specific pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in the number of peaks' average widths. In the corresponding equation, tg is the gradient time and w(ave) is the average peak width at the base. The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography) and the rate theory of chromatography / Van Deemter equation. They can be put into practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory. They are analogous to the calculation of the retention factor for a paper-chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are: the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha), whose standard forms are collected below. 
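For reference, the standard textbook forms of these quantities (standing in for the performance-criteria figures referenced above, which are not reproduced here) are

k' = \frac{t_R - t_0}{t_0}, \qquad
\alpha = \frac{k'_2}{k'_1}, \qquad
N = 16\left(\frac{t_R}{w}\right)^2, \qquad
P_c = 1 + \frac{t_g}{w_{\mathrm{ave}}},

where $w$ is the peak width at the base. The factors combine in the commonly used resolution equation

R_s = \frac{\sqrt{N}}{4}\cdot\frac{\alpha - 1}{\alpha}\cdot\frac{k'}{1 + k'}.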
Together, these factors appear in the resolution equation, which describes how well two components' peaks are separated from or overlap each other. These parameters are mostly used only for describing HPLC reversed-phase and HPLC normal-phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion). Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor. Efficiency factor (N) practically measures how sharp component peaks on the chromatogram are, as the ratio of a component peak's retention time to the width of the peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture; that is, high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with plate number and the 'number of theoretical plates'. Retention factor (kappa prime) measures how long a component of the mixture is retained by the column, determined from the position of its peak in the chromatogram (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for by the void volume of the column. Separation factor (alpha) is a relative comparison of how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for by the void volume of the column. The greater the separation factor value is over 1.0, the better the separation, up to about 2.0, beyond which an HPLC method is probably not needed for separation. Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation. Internal diameter The internal diameter (ID) of an HPLC column is an important parameter. It can influence the detection response when reduced, due to the reduced lateral diffusion of the solute band. It can also affect the separation selectivity when flow rate and injection volumes are not scaled down or up in proportion to the smaller or larger diameter used, both in isocratic and in gradient modes (see the sketch at the end of this section). It determines the quantity of analyte that can be loaded onto the column. Larger-diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns have improved sensitivity and lower solvent consumption in the recent ultra-high-performance liquid chromatography (UHPLC). Larger-ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity. Analytical-scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity. 
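A minimal sketch of that proportional scaling, assuming the column length and particle size stay the same; the example numbers are hypothetical.

def scale_method(flow_ml_min, inj_vol_ul, id_from_mm, id_to_mm):
    """Scale flow rate and injection volume with the square of the column
    internal diameter, keeping the linear velocity (and hence the
    separation) comparable between the two columns."""
    ratio = (id_to_mm / id_from_mm) ** 2
    return flow_ml_min * ratio, inj_vol_ul * ratio

# Moving a 1.0 mL/min, 10 uL method from a 4.6 mm to a 2.1 mm ID column:
print(scale_method(1.0, 10.0, 4.6, 2.1))  # ~(0.21 mL/min, ~2.1 uL)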
Analytical-scale columns are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector. Narrow-bore columns (1–2 mm) are used for applications where more sensitivity is desired, either with special UV-Vis detectors, fluorescence detection, or with other detection methods like liquid chromatography-mass spectrometry. Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused-silica capillaries, rather than the stainless-steel tubing that larger columns employ. Particle size Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes, with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared. According to the equations for column velocity, efficiency and backpressure, reducing the particle diameter by half while keeping the size of the column the same doubles the column velocity and efficiency, but increases the backpressure fourfold. Smaller particles can also decrease peak broadening. Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction. Pore size Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area, while larger pore sizes have better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside. Pump pressure Pumps vary in pressure capacity, but their performance is measured by their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as 60 MPa (6000 lbf/in2), or about 600 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and are therefore able to use much smaller particle sizes in the columns (<2 μm). These "ultra-high-performance liquid chromatography" systems, or UHPLCs, which could also be known as ultra-high-pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC. Detectors HPLC detectors fall into two main categories: universal or selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference in a physical property between the mobile phase alone and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by simply responding to the physical or chemical property of the solute. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly utilized kind of detector is the refractive index detector, which provides readings by measuring the changes in the refractive index of the eluent as it moves through the flow cell. In certain cases, it is possible to use multiple detectors; for example, LC-MS normally combines UV-Vis with a mass spectrometer. 
When used with an electrochemical detector (ECD), HPLC-ECD selectively detects neurotransmitters such as norepinephrine, dopamine, serotonin, glutamate, GABA, acetylcholine and others in neurochemical-analysis research applications. HPLC-ECD detects neurotransmitters down to the femtomolar range. Other methods of detecting neurotransmitters include liquid chromatography-mass spectrometry, ELISA, and radioimmunoassays. Autosamplers Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers have an injection volume and technique which are exactly the same for each injection; consequently, they provide a high degree of injection-volume precision. It is possible to enable sample stirring within the sampling chamber, thus promoting homogeneity. Applications Manufacturing HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high-quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European Pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States Pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. The increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost. Legal This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been the immunoassay, which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs; therefore, HPLC has been used as an alternative method as well. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate concentrations of drugs is somewhat insufficient. Therefore, HPLC in this context is often performed in conjunction with mass spectrometry. Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the necessity of derivatizing with acetylating or alkylating agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents, such as doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs. Research Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates such as anti-fungal and asthma drugs. This technique is also useful in observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm the results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species. Medical and health sciences Medical use of HPLC typically employs a mass spectrometer (MS) as the detector, so the technique is called LC-MS, or LC-MS/MS for tandem MS, where two types of MS are operated sequentially. 
When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS and LC-MS/MS. These include drug development and pharmacology (the scientific study of the effects of drugs and chemicals on living organisms), personalized medicine, public health and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is newborn screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the form of dried blood spots (DBS), which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally. Other methods of detecting molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example of this, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in the detection of vitamin D, which is useful for diagnosing vitamin D deficiencies in children; the sensitivity and specificity of this CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While an expensive tool, the accuracy of HPLC is nearly unparalleled. See also History of chromatography Capillary electrochromatography Column chromatography Csaba Horváth Ion chromatography Micellar liquid chromatography References Further reading L. R. Snyder, J.J. Kirkland, and J. W. Dolan, Introduction to Modern Liquid Chromatography, John Wiley & Sons, New York, 2009. M.W. Dong, Modern HPLC for practicing scientists. Wiley, 2006. L. R. Snyder, J.J. Kirkland, and J. L. Glajch, Practical HPLC Method Development, John Wiley & Sons, New York, 1997. S. Ahuja and H. T. Rasmussen (ed), HPLC Method Development for Pharmaceuticals, Academic Press, 2007. S. Ahuja and M.W. Dong (ed), Handbook of Pharmaceutical Analysis by HPLC, Elsevier/Academic Press, 2005. Y. V. Kazakevich and R. LoBrutto (ed.), HPLC for Pharmaceutical Scientists, Wiley, 2007. U. D. Neue, HPLC Columns: Theory, Technology, and Practice, Wiley-VCH, New York, 1997. M. C. McMaster, HPLC, a practical user's guide, Wiley, 2007. External links HPLC Chromatography Principle, Application [Basic Note] – 2020. at Rxlalit.com Hungarian inventions Chromatography Scientific techniques
High-performance liquid chromatography
Chemistry
10,890
36,626,511
https://en.wikipedia.org/wiki/6%20Equulei
6 Equulei is a probable (95% chance) astrometric binary star system in the northern constellation of Equuleus, located 380 light years from the Sun. It is barely visible to the naked eye as a dim, white-hued star with an apparent visual magnitude of 6.07. The system is moving further away from the Earth with a heliocentric radial velocity of +6.9 km/s. It forms a wide optical double with γ Equulei, at an angular separation of 336 arcseconds in 2011. The visible component is an Ap star with a stellar classification of A2Vs, matching the evolutionary state of an A-type main sequence star while displaying "sharp" absorption lines. It is an estimated 970 million years old with a projected rotational velocity of 65 km/s. The star has 2.6 times the mass of the Sun and around 1.7 times the Sun's radius. It is radiating 71 times the luminosity of the Sun from its photosphere at an effective temperature of 9,078 K. References External links frostydrew.org/stars.dc/star/id-126597/pss-obsy www.wolframalpha.com/input/?i=6+Equulei A-type main-sequence stars Ap stars Equuleus Durchmusterung objects Equulei, 06 201616 104538 8098
6 Equulei
Astronomy
302
2,827,774
https://en.wikipedia.org/wiki/Charpy%20impact%20test
In materials science, the Charpy impact test, also known as the Charpy V-notch test, is a standardized high-strain-rate test which determines the amount of energy absorbed by a material during fracture. Absorbed energy is a measure of the material's notch toughness. It is widely used in industry, since it is easy to prepare and conduct and results can be obtained quickly and cheaply. A disadvantage is that some results are only comparative. The test was pivotal in understanding the fracture problems of ships during World War II. The test was developed around 1900 by S. B. Russell (1898, American) and Georges Charpy (1901, French). The test became known as the Charpy test in the early 1900s due to the technical contributions and standardization efforts by Charpy. History In 1896, S. B. Russell introduced the idea of residual fracture energy and devised a pendulum fracture test. Russell's initial tests measured un-notched samples. In 1897, Frémont introduced a test to measure the same phenomenon using a spring-loaded machine. In 1901, Georges Charpy proposed a standardized method improving Russell's by introducing a redesigned pendulum and notched sample, giving precise specifications. Definition The apparatus consists of a pendulum of known mass and length that is dropped from a known height to impact a notched specimen of material. The energy transferred to the material can be inferred by comparing the difference in the height of the hammer before and after the fracture (energy absorbed by the fracture event). The notch in the sample affects the results of the impact test, thus it is necessary for the notch to be of regular dimensions and geometry. The size of the sample can also affect results, since the dimensions determine whether or not the material is in plane strain. This difference can greatly affect the conclusions made. The standard methods for notched-bar impact testing of metallic materials can be found in ASTM E23, ISO 148-1 or EN 10045-1 (retired and replaced with ISO 148-1), where all the aspects of the test and equipment used are described in detail. Quantitative results The quantitative result of the impact test is the energy needed to fracture a material, and it can be used to measure the toughness of the material. There is a connection to the yield strength, but it cannot be expressed by a standard formula. Also, the strain rate may be studied and analyzed for its effect on fracture. The ductile-brittle transition temperature (DBTT) may be derived from the temperature where the energy needed to fracture the material drastically changes. However, in practice there is no sharp transition and it is difficult to obtain a precise transition temperature (it is really a transition region). An exact DBTT may be empirically derived in many ways: a specific absorbed energy, a change in the aspect of the fracture (such as 50% of the area being cleavage), etc. Qualitative results The qualitative results of the impact test can be used to determine the ductility of a material. If the material breaks on a flat plane, the fracture was brittle, and if the material breaks with jagged edges or shear lips, then the fracture was ductile. Usually, a material does not break in just one way or the other, and thus comparing the jagged to flat surface areas of the fracture will give an estimate of the percentage of ductile and brittle fracture. Sample sizes According to ASTM A370, the standard specimen size for Charpy impact testing is 10 mm × 10 mm × 55 mm. 
Subsize specimen sizes are: 10 mm × 7.5 mm × 55 mm, 10 mm × 6.7 mm × 55 mm, 10 mm × 5 mm × 55 mm, 10 mm × 3.3 mm × 55 mm, 10 mm × 2.5 mm × 55 mm. Specimen details are given in ASTM A370 (Standard Test Method and Definitions for Mechanical Testing of Steel Products). According to EN 10045-1 (retired and replaced with ISO 148), standard specimen sizes are 10 mm × 10 mm × 55 mm. Subsize specimens are: 10 mm × 7.5 mm × 55 mm and 10 mm × 5 mm × 55 mm. According to ISO 148, standard specimen sizes are 10 mm × 10 mm × 55 mm. Subsize specimens are: 10 mm × 7.5 mm × 55 mm, 10 mm × 5 mm × 55 mm and 10 mm × 2.5 mm × 55 mm. According to MPIF Standard 40, the standard unnotched specimen size is 10 mm (±0.125 mm) × 10 mm (±0.125 mm) × 55 mm (±2.5 mm). Impact test results on low- and high-strength materials The impact energy of low-strength metals that do not show a change of fracture mode with temperature is usually high and insensitive to temperature. For these reasons, impact tests are not widely used for assessing the fracture resistance of low-strength materials whose fracture modes remain unchanged with temperature. Impact tests typically show a ductile-brittle transition for high-strength materials that do exhibit a change in fracture mode with temperature, such as body-centered cubic (BCC) transition metals. Impact tests on natural materials (which can be considered low-strength), such as wood, are used to study material toughness, and are subject to a number of issues that include the interaction between the pendulum and the specimen as well as higher modes of vibration and multiple contacts between the pendulum tup and the specimen. Generally, high-strength materials have low impact energies, which attests to the fact that fractures easily initiate and propagate in high-strength materials. The impact energies of high-strength materials other than steels or BCC transition metals are usually insensitive to temperature. High-strength BCC steels display a wider variation of impact energy than high-strength metals that do not have a BCC structure, because steels undergo a microscopic ductile-brittle transition. Regardless, the maximum impact energy of high-strength steels is still low due to their brittleness. See also Izod impact strength test Brittle Impact force Notes External links Calculator Video on the Charpy impact test Fracture mechanics Materials testing
Charpy impact test
Materials_science,Engineering
1,257
37,662,627
https://en.wikipedia.org/wiki/Greystone%20%28architecture%29
Greystones are a style of residential building most commonly found in Chicago, Illinois, United States. As the name suggests, the buildings are typically grey in color and were most often built with Bedford Limestone quarried from South Central Indiana. In Chicago, there are roughly 30,000 greystones, usually built as semi- or fully detached townhouses. The term "greystone" is also used to refer to buildings in Montreal, Quebec, Canada (known in French as pierre grise). It refers to the grey limestone facades of many buildings, both residential and institutional, constructed between 1730 and 1920. History and usage The building style first began to appear in the 1890s, initially in neighborhoods like Woodlawn and then North Lawndale and Lake View, and continued through the 1930s with two major approaches in design. The first style, between 1890 and 1905, was Romanesque in nature, with arches and cornices. This initial style and the choice of grey limestone occurred as the city rebuilt and grew in economic power after the Great Chicago Fire of 1871, though the buildings were designed for a wide range of socioeconomic classes. The second style was predominantly built in a Neoclassical design incorporating smoother limestone blocks featuring columns and bay windows. Greystones were built in a wide variety of sizes to accommodate different residential needs, with most being two to three floors in size, many commonly containing two to three flats but some up to six. Regardless of their size, they were always built with the limestone facade facing the street to take advantage of the limited size of standard Chicago lots. There are an estimated 30,000 greystones still remaining in the city, and many citizens, architects and preservationists are working to revive those that remain through the Historic Chicago Greystone Initiative. Many greystones are preserved as the multi-family structures they were designed and built to be. Today, greystones often retain original Romanesque or Neoclassical details such as "roughly carved blocks of greystone and intricately carved column capitals," though many were built in other styles. Styles There are many different styles of greystones, with the City of Chicago defining most attributes of the style for landmark status. Romanesque Revival "Heavy, rough-cut stone walls Round arches and squat columns Deeply recessed windows Pressed metal bays and turrets" Queen Anne "Rich but simple ornament Wide variety of materials, including wood, stone and pressed metal Expansive porches Pressed metal bays and turrets Irregular roofline with many dormers and chimneys" Chateauesque "Vertical proportions Massive-looking masonry walls Ornate carved stone ornament High-peaked hipped roofs, elaborate dormers and tall chimneys" Classical Revival/Beaux Arts "Symmetrical facades Minimal use of bays, towers, or other projecting building elements Classical ornament, including columns, cornices and triangular pediments Wide variety of materials, including brick, stone and wood" See also Brownstone References External links Greystone Certification Program American architectural styles Architecture in Illinois Buildings and structures in Chicago Building materials History of Chicago House styles Industrial minerals Limestone
Greystone (architecture)
Physics,Engineering
607
2,993,703
https://en.wikipedia.org/wiki/Data%20position%20measurement
Data position measurement (DPM) is a CD and DVD copy protection mechanism that operates by measuring the physical location of data on an optical disc. Stamped CDs are perfect clones and always have the data at the expected location, while a burned copy exhibits physical differences. DPM detects these differences to identify user-made copies. DPM was first used publicly in 1996 by Link Data Security's CD-Cops. It was used in volume on Lademans Leksikon, published by Egmont in November 1996. RMPS DPM can be observed and subsequently encoded into a recordable media physical signature (RMPS). In concert with emulation software, RMPS can reproduce the effects of DPM, thereby making a copy appear as an original disc and fooling the protection mechanism. This technique was pioneered by the software Alcohol 120%, which created the .mds file format for this purpose. References "Microsoft buy Danish copy-protection, 1997 (Danish)" Compact Disc and DVD copy protection Optical computer storage
Data position measurement
Technology
203
2,994,661
https://en.wikipedia.org/wiki/Lewis%20number
In fluid dynamics and thermodynamics, the Lewis number (denoted Le) is a dimensionless number defined as the ratio of thermal diffusivity to mass diffusivity. It is used to characterize fluid flows where there is simultaneous heat and mass transfer. The Lewis number puts the thickness of the thermal boundary layer in relation to the concentration boundary layer. The Lewis number is defined as

\mathrm{Le} = \frac{\alpha}{D} = \frac{k}{\rho c_p D},

where: α is the thermal diffusivity, D is the mass diffusivity (the mixture-averaged diffusion coefficient), k is the thermal conductivity, ρ is the density, and c_p is the specific heat capacity at constant pressure. In the field of fluid mechanics, many sources define the Lewis number to be the inverse of the above definition. The Lewis number can also be expressed in terms of the Prandtl number (Pr) and the Schmidt number (Sc):

\mathrm{Le} = \frac{\mathrm{Sc}}{\mathrm{Pr}}

It is named after Warren K. Lewis (1882–1975), who was the first head of the Chemical Engineering Department at MIT. Some workers in the field of combustion assume (incorrectly) that the Lewis number was named for Bernard Lewis (1899–1993), who for many years was a major figure in the field of combustion research. References Further reading Fluid dynamics Dimensionless numbers of fluid mechanics Combustion
Lewis number
Chemistry,Engineering
254
151,694
https://en.wikipedia.org/wiki/Tar%20%28computing%29
In computing, tar is a computer software utility for collecting many files into one archive file, often referred to as a tarball, for distribution or backup purposes. The name is derived from "tape archive", as it was originally developed to write data to sequential I/O devices with no file system of their own, such as devices that use magnetic tape. The archive data sets created by tar contain various file system parameters, such as name, timestamps, ownership, file-access permissions, and directory organization. POSIX abandoned tar in favor of pax, yet tar sees continued widespread use. History The command-line utility was first introduced in the Version 7 Unix in January 1979, replacing the tp program (which in turn replaced "tap"). The file structure to store this information was standardized in POSIX.1-1988 and later POSIX.1-2001, and became a format supported by most modern file archiving systems. The tar command was abandoned in POSIX.1-2001 in favor of pax command, which was to support ustar file format; the tar command was indicated for withdrawal in favor of pax command at least since 1994. Today, Unix-like operating systems usually include tools to support tar files, as well as utilities commonly used to compress them, such as xz, gzip, and bzip2. The command has also been ported to the IBM i operating system. BSD-tar has been included in Microsoft Windows since Windows 10 April 2018 Update, and there are otherwise multiple third party tools available to read and write these formats on Windows. Rationale Many historic tape drives read and write variable-length data blocks, leaving significant wasted space on the tape between blocks (for the tape to physically start and stop moving). Some tape drives (and raw disks) support only fixed-length data blocks. Also, when writing to any medium such as a file system or network, it takes less time to write one large block than many small blocks. Therefore, the tar command writes data in records of many 512 B blocks. The user can specify a blocking factor, which is the number of blocks per record. The default is 20, producing 10 KiB records. File format There are multiple tar file formats, including historical and current ones. Two tar formats are codified in POSIX: ustar and pax. Not codified but still in current use is the GNU tar format. A tar archive consists of a series of file objects, hence the popular term tarball, referencing how a tarball collects objects of all kinds that stick to its surface. Each file object includes any file data, and is preceded by a 512-byte header record. The file data is written unaltered except that its length is rounded up to a multiple of 512 bytes. The original tar implementation did not care about the contents of the padding bytes, and left the buffer data unaltered, but most modern tar implementations fill the extra space with zeros. The end of an archive is marked by at least two consecutive zero-filled records. (The origin of tar's record size appears to be the 512-byte disk sectors used in the Version 7 Unix file system.) The final block of an archive is padded out to full length with zeros. Header The file header record contains metadata about a file. To ensure portability across different architectures with different byte orderings, the information in the header record is encoded in ASCII. Thus if all the files in an archive are ASCII text files, and have ASCII names, then the archive is essentially an ASCII text file (containing many NUL characters). 
The fields defined by the original Unix tar format are listed below; the link-indicator/file-type values include some modern extensions. When a field is unused it is filled with NUL bytes. The header uses 257 bytes, then is padded with NUL bytes to make it fill a 512-byte record. There is no "magic number" in the header for file identification. Pre-POSIX.1-1988 (i.e. v7) tar header (byte offset, field length, field): 0, 100, file name; 100, 8, file mode; 108, 8, owner's numeric user ID; 116, 8, group's numeric user ID; 124, 12, file size in bytes; 136, 12, last modification time; 148, 8, checksum; 156, 1, link indicator (file type); 157, 100, name of linked file. The pre-POSIX.1-1988 link indicator field can have the following values: '0' or ASCII NUL for a normal file, '1' for a hard link, and '2' for a symbolic link. Some pre-POSIX.1-1988 tar implementations indicated a directory by having a trailing slash (/) in the name. Numeric values are encoded in octal numbers using ASCII digits, with leading zeroes. For historical reasons, a final NUL or space character should also be used. Thus although there are 12 bytes reserved for storing the file size, only 11 octal digits can be stored. This gives a maximum file size of 8 gigabytes on archived files. To overcome this limitation, in 2001 star introduced a base-256 coding that is indicated by setting the high-order bit of the leftmost byte of a numeric field. GNU-tar and BSD-tar followed this idea. Additionally, versions of tar from before the first POSIX standard from 1988 pad the values with spaces instead of zeroes. The checksum is calculated by taking the sum of the unsigned byte values of the header record with the eight checksum bytes taken to be ASCII spaces (decimal value 32). It is stored as a six-digit octal number with leading zeroes followed by a NUL and then a space. Various implementations do not adhere to this format. In addition, some historic tar implementations treated bytes as signed. Implementations typically calculate the checksum both ways, and treat it as good if either the signed or unsigned sum matches the included checksum. Unix filesystems support multiple links (names) for the same file. If several such files appear in a tar archive, only the first one is archived as a normal file; the rest are archived as hard links, with the "name of linked file" field set to the first one's name. On extraction, such hard links should be recreated in the file system. UStar format Most modern tar programs read and write archives in the UStar (Unix Standard TAR) format, introduced by the POSIX IEEE P1003.1 standard from 1988. It introduced additional header fields, including a "ustar" magic string, owner user and group names, device major and minor numbers, and a 155-byte filename prefix. Older tar programs will ignore the extra information (possibly extracting partially named files), while newer programs will test for the presence of the "ustar" string to determine if the new format is in use. The UStar format allows for longer file names and stores additional information about each file. The maximum filename size is 256, but it is split between the preceding "filename prefix" field and the filename itself, so an individual component can be much shorter. The type flag field can have the following values: '0' (or ASCII NUL) for a normal file, '1' for a hard link, '2' for a symbolic link, '3' for a character device, '4' for a block device, '5' for a directory, '6' for a FIFO (named pipe), and '7' for a contiguous file. POSIX.1-1988 vendor-specific extensions using type flag values 'A'–'Z' partially have different meanings with different vendors and thus are seen as outdated; they were replaced by the POSIX.1-2001 extensions that also include a vendor tag. Type '7' (contiguous file) is formally marked as reserved in the POSIX standard, but was meant to indicate files which ought to be contiguously allocated on disk. Few operating systems support creating such files explicitly, and hence most TAR programs do not support them, and will treat type 7 files as if they were type 0 (regular). 
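A minimal Python sketch of the checksum rule just described, written against a raw 512-byte header record; it assumes the common unsigned-byte convention and a plain octal checksum field.

def header_checksum(header: bytes) -> int:
    """Unsigned byte sum with the 8-byte checksum field (offsets
    148-155) treated as ASCII spaces (decimal 32)."""
    return sum(header[:148]) + 8 * 32 + sum(header[156:512])

def stored_checksum(header: bytes) -> int:
    """Parse the stored octal checksum, ignoring trailing NUL/space."""
    return int(header[148:156].strip(b'\x00 '), 8)

def checksum_ok(header: bytes) -> bool:
    return header_checksum(header) == stored_checksum(header)

A robust reader would, as noted above, also compute the signed variant and accept the header if either sum matches.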
One exception to the lack of contiguous-file support was older versions of GNU tar when running on the MASSCOMP RTU (Real Time Unix) operating system, which supported an O_CTG flag to the open() function to request a contiguous file; however, that support was removed from GNU tar version 1.24 onwards. POSIX.1-2001/pax In 1997, Sun proposed a method for adding extensions to the tar format. This method was later accepted for the POSIX.1-2001 standard. This format is known as the extended tar format or pax format. The new tar format allows users to add any type of vendor-tagged, vendor-specific enhancement. The following tags are defined by the POSIX standard: atime, mtime: all timestamps of a file in arbitrary resolution (most implementations use nanosecond granularity) path: path names of unlimited length and character set coding linkpath: symlink target names of unlimited length and character set coding uname, gname: user and group names of unlimited length and character set coding size: files with unlimited size (the historic tar format is limited to 8 GB) uid, gid: user ID and group ID without size limitation (the historic tar format is limited to a max. id of 2097151) a character set definition for path names and user/group names (UTF-8) In 2001, the Star program became the first tar to support the new format. In 2004, GNU tar followed, though it does not yet write it as its default output format. The pax format is designed so that all implementations able to read the UStar format will be able to read the pax format as well. The only exceptions are files that make use of extended features, such as longer file names. For compatibility, these are encoded in the tar file as files of the special types 'x' and 'g', typically placed under a PaxHeaders directory. A pax-supporting implementation makes use of the information, while non-supporting ones like 7-Zip process them as additional files. Features of the archival utilities Besides creating and extracting archives, the functionality of the various archival utilities varies. For example, implementations might automatically detect the format of compressed TAR archives for extraction so the user does not have to specify it, and let the user limit adding files to those modified after a specified date. Uses Command syntax tar [-options] <name of the tar archive> [files or directories to add to the archive] Basic options: -c, --create — create a new archive; -a, --auto-compress — additionally compress the archive with a compressor which will be automatically determined by the file name extension of the archive: if the archive's name ends with .gz then use gzip, with .xz then use xz, with .zst then use Zstandard, etc.; -r, --append — append files to the end of an archive; -x, --extract, --get — extract files from an archive; -f, --file — specify the archive's name; -t, --list — show a list of files and folders in the archive; -v, --verbose — show a list of processed files. Basic usage Create an archive file from the file README.txt and the directory src: $ tar -cvf archive.tar README.txt src Extract the contents of archive.tar into the current directory: $ tar -xvf archive.tar Create an archive from README.txt and src and compress it with gzip: $ tar -cavf archive.tar.gz README.txt src Extract the contents of archive.tar.gz into the current directory: $ tar -xvf archive.tar.gz Tarpipe A tarpipe is the method of creating an archive on the standard output file of the tar utility and piping it to another tar process on its standard input, working in another directory, where it is unpacked. 
This process copies an entire source directory tree including all special files, for example: $ tar cf - srcdir | tar x -C destdir Software distribution The tar format continues to be used extensively for open-source software distribution. *NIX distributions use it in various source- and binary-package distribution mechanisms, with most software source code made available in compressed tar archives. Limitations The original tar format was created in the early days of Unix, and despite current widespread use, many of its design features are considered dated. Other formats have been created to address the shortcomings of tar. File names Due to the field size, the original TAR format was unable to store file paths and names in excess of 100 characters. To overcome this problem while maintaining readability by existing TAR utilities, GNU tar stores file paths and names in excess of 100 characters in @LongLink entries, which are seen as ordinary files by TAR utilities unaware of this feature. Similarly, the PAX format uses PaxHeaders entries. Attributes Many older tar implementations do not record nor restore extended attributes (xattrs) or access-control lists (ACLs). In 2001, Star introduced support for ACLs and extended attributes, through its own tags for POSIX.1-2001 pax. bsdtar uses the star extensions to support ACLs. More recent versions of GNU tar support Linux extended attributes, reimplementing the star extensions. A number of extensions are reviewed in the file formats manual page for BSD tar, tar(5). Tarbomb A tarbomb, in hacker slang, is a tarball containing a large number of items whose contents are written to the current directory or some other existing directory when untarred, instead of to a directory created by the tarball specifically for the extracted outputs. It is at best an inconvenience to the user, who is obliged to identify and delete a number of files interspersed with the directory's other contents. Such behavior is considered bad etiquette on the part of the archive's creator. A related problem is the use of absolute paths or parent-directory references when creating tar files. Files extracted from such archives will often be created in unusual locations outside the working directory and, like a tarbomb, have the potential to overwrite existing files. However, modern versions of FreeBSD and GNU tar do not create or extract absolute paths and parent-directory references by default, unless it is explicitly allowed with the -P flag (--absolute-names in GNU tar). The bsdtar program, which is also available on many operating systems and is the default tar utility on Mac OS X v10.6, also does not follow parent-directory references or symbolic links. If a user has only a very old tar available, which does not feature those security measures, these problems can be mitigated by first examining a tar file using the command tar tf archive.tar, which lists the contents and allows the user to exclude problematic files afterwards. These commands do not extract any files, but display the names of all files in the archive. If any are problematic, the user can create a new empty directory and extract the archive into it, or avoid the tar file entirely. Most graphical tools can display the contents of the archive before extracting them. Vim can open tar archives and display their contents. GNU Emacs is also able to open a tar archive and display its contents in a dired buffer. 
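In the same spirit as listing an archive with tar tf before unpacking, the following sketch uses Python's standard tarfile module to flag entries with absolute paths or parent-directory references before any extraction takes place. (A complete check would also inspect symlink targets, as described above.)

import tarfile

def flag_suspicious_entries(archive_path):
    """Print the archive listing, marking entries that could escape
    the extraction directory (absolute paths or '..' components)."""
    with tarfile.open(archive_path) as tf:  # compression is auto-detected
        for member in tf.getmembers():
            parts = member.name.split('/')
            suspicious = member.name.startswith('/') or '..' in parts
            print(('SUSPICIOUS ' if suspicious else '') + member.name)

flag_suspicious_entries('archive.tar')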
Random access The tar format was designed without a centralized index or table of contents for files and their properties, for streaming to tape backup devices. The archive must be read sequentially to list or extract files. For large tar archives, this causes a performance penalty, making tar archives unsuitable for situations that often require random access to individual files. With a well-formed tar file stored on a seekable (i.e., allowing efficient random reads) medium, a program can still relatively quickly (in time linear in the file count) look for a file by skipping file reads according to the "size" field in the file headers. This is the basis for the --seek option in GNU tar. When a tar file is compressed whole, the compression format, being usually non-seekable, prevents this optimization. A number of "indexed" compressors, which are aware of the tar format, can restore this feature for compressed files. To maintain seekability, tar files must also be concatenated properly, by removing the trailing zero block at the end of each file. Duplicates Another issue with the tar format is that it allows several (possibly different) files in an archive to have identical paths and filenames. When such an archive is extracted, the later version of a file usually overwrites the earlier one. This can create a non-explicit (unobvious) tarbomb, which technically does not contain files with absolute paths or files referring to parent directories, but still causes files outside the current directory to be overwritten (for example, an archive may contain two entries with the same path and filename, the first of which is a symlink to some location outside the current directory and the second of which is a regular file; extracting such an archive on some tar implementations may then cause a write to the location pointed to by the symlink). Key implementations Historically, many systems have implemented tar, and many general file archivers have at least partial support for tar (often using one of the implementations below). The history of tar is a story of incompatibilities, known as the "tar wars". Most tar implementations can also read and create cpio and pax (the latter is actually a tar format with POSIX.1-2001 extensions). Key implementations in order of origin: Solaris tar, based on the original Unix V7 tar, comes as the default on the Solaris operating system GNU tar is the default on most Linux distributions. It is based on the public domain implementation pdtar which started in 1987. Recent versions can use various formats, including the ustar, pax, GNU and v7 formats. FreeBSD tar (also BSD tar) has become the default tar on most Berkeley Software Distribution-based operating systems including Mac OS X. The core functionality is available as libarchive for inclusion in other applications. This implementation automatically detects the format of the file and can extract from tar, pax, cpio, zip, rar, ar, xar, rpm and ISO 9660 cdrom images. It also comes with a functionally equivalent cpio command-line interface. Schily tar, better known as star, is historically significant as some of its extensions were quite popular. First published in April 1997, its developer has stated that he began development in 1982. Python tarfile module supports multiple tar formats, including ustar, pax and gnu; it can read but not create the V7 format and the SunOS tar extended format; pax is the default format for creation of archives (see the sketch after this list). Available since 2003. Additionally, most pax and cpio implementations can read and create multiple types of tar files.
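As an illustration of the Python tarfile module mentioned in the list above, the following minimal sketch (file and directory names are examples only) writes a pax archive explicitly and then lists its contents, roughly mirroring tar tf:

import tarfile

# Write a pax (POSIX.1-2001) archive explicitly; pax has also been the
# module's default output format since Python 3.8.
with tarfile.open("example.tar", "w", format=tarfile.PAX_FORMAT) as tar:
    tar.add("README.txt")

# List the archive's contents without extracting anything.
with tarfile.open("example.tar") as tar:
    for member in tar.getmembers():
        print(member.name, member.size, member.mtime)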
Suffixes for compressed files tar archive files usually have the file suffix .tar (e.g. somefile.tar). A tar archive file contains uncompressed byte streams of the files which it contains. To achieve archive compression, a variety of compression programs are available, such as gzip, bzip2, xz, lzip, lzma, zstd, or compress, which compress the entire tar archive. Typically, the compressed form of the archive receives a filename by appending the format-specific compressor suffix to the archive file name. For example, a tar archive archive.tar is named archive.tar.gz when it is compressed by gzip. Popular tar programs like the BSD and GNU versions of tar support the command line options Z (compress), z (gzip), and j (bzip2) to compress or decompress the archive file upon creation or unpacking. Relatively recent additions include --lzma (LZMA), --lzop (lzop), --xz or J (xz), --lzip (lzip), and --zstd. The decompression of these formats is handled automatically if supported filename extensions are used, and compression is handled automatically using the same filename extensions if the option --auto-compress (short form -a) is passed to an applicable version of GNU tar. BSD tar detects an even wider range of compressors (lrzip, lz4), using not the filename but the data within the file. Unrecognized formats must be compressed or decompressed manually, by piping. MS-DOS's 8.3 filename limitations resulted in additional conventions for naming compressed tar archives. However, this practice has declined with FAT now offering long filenames. See also Comparison of file archivers Comparison of archive formats List of archive formats List of Unix commands References External links X/Open CAE Specification Commands and Utilities Issue 4, Version 2 (pdf), 1994, opengroup.org – indicates tar as to be withdrawn tar in The Single UNIX Specification, Version 2, 1997, opengroup.org – indicates applications should migrate to the pax utility C.4 Utilities in The Open Group Base Specifications Issue 6, 2004 Edition, opengroup.org – indicates tar as removed – specifies the ustar and pax file formats – manual from GNU TAR - Windows CMD - SS64.com Archive formats Free backup software GNU Project software Unix archivers and compression-related utilities Plan 9 commands IBM i Qshell commands
Tar (computing)
Technology
4,310
47,666,123
https://en.wikipedia.org/wiki/Journal%20of%20Integer%20Sequences
The Journal of Integer Sequences is a peer-reviewed open-access academic journal in mathematics, specializing in research papers about integer sequences. It was founded in 1998 by Neil Sloane. Sloane had previously published two books on integer sequences, and in 1996 he founded the On-Line Encyclopedia of Integer Sequences (OEIS). Needing an outlet for research papers concerning the sequences he was collecting in the OEIS, he founded the journal. Since 2002 the journal has been hosted by the David R. Cheriton School of Computer Science at the University of Waterloo, with Waterloo professor Jeffrey Shallit as its editor-in-chief. There are no page charges for authors, and all papers are free to all readers. The journal publishes approximately 50–75 papers annually. In most years from 1999 to 2014, SCImago Journal Rank ranked the Journal of Integer Sequences as a third-quartile journal in discrete mathematics and combinatorics. It is indexed by Mathematical Reviews and Zentralblatt MATH. References External links Mathematics journals Open access journals Academic journals established in 1998 English-language journals Irregular journals
Journal of Integer Sequences
Mathematics
221
430,988
https://en.wikipedia.org/wiki/Mean%20time%20to%20recovery
Mean time to recovery (MTTR) is the average time that a device will take to recover from any failure. Examples of such devices range from self-resetting fuses (where the MTTR would be very short, probably seconds) to whole systems which have to be repaired or replaced. The MTTR would usually be part of a maintenance contract, where the user would pay more for a system whose MTTR was 24 hours than for one of, say, 7 days. This does not mean the supplier is guaranteeing to have the system up and running again within 24 hours (or 7 days) of being notified of the failure. It does mean the average repair time will tend towards 24 hours (or 7 days). A more useful maintenance contract measure is the maximum time to recovery, which can be easily measured and for which the supplier can be held accountable. Note that some suppliers will interpret MTTR to mean 'mean time to respond' and others will take it to mean 'mean time to replace/repair/recover/resolve'. The former indicates that the supplier will acknowledge a problem and initiate mitigation within a certain timeframe. Some systems may have an MTTR of zero, which means that they have redundant components which can take over the instant the primary one fails; see RAID for example. However, the failed device involved in this redundant configuration still needs to be returned to service and hence the device itself has a non-zero MTTR even if the system as a whole (through redundancy) has an MTTR of zero. But, as long as service is maintained, this is a minor issue. See also Mean time to repair Mean time between failures Mean down time Service-level agreement References Disaster recovery Failure Reliability engineering
Mean time to recovery
Engineering
352
2,898,227
https://en.wikipedia.org/wiki/Launch%20control%20%28automotive%29
Launch control is an electronic aid to assist drivers of both racing and street cars to accelerate from a standing start. Motorcycles have been variously fitted with mechanical and electronic devices for both street and race. Popular automobiles with launch control include the BMW M series, certain marques of the Volkswagen Group with Direct-Shift Gearbox (most notably the Bugatti Veyron), Porsche 911 (sport+ mode), Panamera Turbo, Alfa Romeo with TCT gearbox and certain General Motors products. Mitsubishi also incorporated launch control into their Twin Clutch SST gearbox, on its "S-Sport" mode, but the mode is only available in the Evolution X MR and MR Touring (USDM). The Jaguar F-Type includes launch control. The Nissan GT-R has electronics to control launch but the company does not use the term "launch control" since some owners have equated the term with turning off the stability control to launch the car, which may void the warranty of the drivetrain. One version of the Nissan GT-R allows the user to launch the car by turning the traction control to "R" mode. Operation Launch control is, in essence, a second rev limiter. Launch control operates by using an electronic accelerator and a computer program in a drive-by-wire application, and through fuel or spark cut in a mechanical throttle application. The software can control acceleration based on engine specifications to make the car accelerate smoothly and as fast as possible, avoiding spinning of the drive wheels, engine failure due to over-revving, and clutch and gearbox problems. Looking more in depth, launch control holds the engine's RPM at a set number, allowing the car to build power before the computer or operator engages the clutch. In racing cars, this feature is only available at the start of the race, when the car is stationary on the starting grid. After the car is running at a certain speed, the software is disabled. Traditional launch control is only feasible in a road car with a clutch or clutch pack, which includes cars with a manual transmission or dual-clutch transmission. A road car with an automatic transmission typically has the software perform brake-torquing, the action of holding the car at wide-open throttle while also holding the brake so the car doesn't move, which is separate from launch control. Aftermarket launch control Two-Step Rev Limiting Modern vehicles are increasingly becoming equipped with launch control features available straight from the factory. However, if a vehicle doesn't come equipped with such features, then aftermarket forms of launch control can be purchased and installed. A common form of aftermarket launch control is known as two-step rev limiting. A two-step rev limiter is a module that regulates the engine's rpm for a controlled launch and optimal power settings. Two-step limiting confines rpm at two separate points. The first point is programmed to limit the revolutions to a desirable launch range, and the second is set to protect the engine from over-revving. The limiting itself is controlled through the module by regulating fuel and ignition. Once the set revolutions are reached, the two-step system adjusts these parameters, cutting power production until the limiter is released. It is important to note that an aftermarket two-step rev limiter is only a viable option with a manual transmission; launch control for a car with an automatic transmission requires a different setup.
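The two-stage behaviour described above can be summarised in code. The following Python sketch is purely illustrative; the thresholds, names, and stationarity test are invented for the example and are not taken from any particular product:

# Hypothetical thresholds for illustration only.
LAUNCH_LIMIT_RPM = 4000   # stage one: holds rpm in the launch range
REDLINE_LIMIT_RPM = 7200  # stage two: protects the engine from over-revving

def ignition_allowed(rpm, clutch_engaged, speed_kph):
    # Return False to cut spark/fuel for the current engine cycle.
    if not clutch_engaged and speed_kph < 5:
        # Stage one: the car is effectively stationary with the clutch in,
        # so hold the engine at the launch rpm by cutting ignition above it.
        return rpm < LAUNCH_LIMIT_RPM
    # Stage two: normal driving; only the redline limiter applies.
    return rpm < REDLINE_LIMIT_RPM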
Reason for use Racing drivers have only a very short time at the start of a race in which to achieve competitive acceleration. High power delivery to the gearbox and driven wheels cannot easily be managed even by the most skilled drivers. Launch control is also highly useful in turbocharged engines. Because of the way a turbocharger works, the engine's full torque is not available at a moment's notice. With a launch control system, the driver can ensure that the turbocharger receives enough exhaust pressure to maintain boost pressure. This effect can be amplified further by having the launch control software progressively retard ignition timing up to a set RPM. Launch control was originally intended to give cars the ability to accelerate from a stop as fast as possible under optimal engine conditions. However, car communities around the United States have begun to organize events centered on a byproduct of launch control systems, usually called a backfire. Using aftermarket launch control systems allows drivers to manipulate the fuel and ignition settings. To create a backfire, the ignition settings are turned down, allowing a build-up of excess fuel that creates a larger combustion, producing loud bangs and pops from the exhaust. In some instances the launch control systems are modified to produce large flames that also expel from the exhaust pipe. Competitions are held in car communities based on achieving the loudest backfire or producing the largest flame. History Developments in electronics in the 1980s enabled the introduction of launch control. In 1985, Renault's RE60 F1 car stored information on a diskette which was later unloaded at the pits, giving the engineers detailed data about the car's behavior. Later, telemetry allowed the data to be sent by radio between the pits and the car. The increasing use of electronics on the car allowed engineers to modify the settings of certain parameters whilst it was on the track, which is called bi-directional telemetry. Among the electronic driving aids were a semi-automatic transmission, an anti-lock braking system (ABS), a traction control system, and active suspension. The 1993 Williams FW15C model featured all of these aids. This trend was ended by the FIA when it outlawed these aids for the 1994 season, considering that they reduced the importance of driver skill to too great a degree. Bi-directional telemetry was also forbidden, but was soon reinstated, as the FIA found it too hard to analyze the engine programmes in search of hidden code that broke the rules. Fully automatic transmissions, traction control, and launch control were allowed again from the 2001 Spanish Grand Prix, but as of the 2004 and 2008 seasons they were outlawed again in order to reduce the money needed for a competitive F1 team. From being a feature that was predominantly seen only in race cars, launch control is now featured in almost all modern consumer car brands. Brands such as BMW, Dodge, and Mercedes all have implemented a launch feature in select models of their vehicles. Motorcycle usage Street motorcycles have been fitted with factory devices to balance power characteristics to rider requirements. Competition entrants call this "holeshot" control. Race machines are increasingly using additional suspension-altering technology to lower the stance and aid aerodynamics. Motocross bikes use mechanical holeshot devices to temporarily compress the front suspension prior to race-start.
Gallery References External links Ford Falcon XR6 Turbo Launch control Cars With Launch Control: Houston, We Have Lift Off! Automotive technologies Formula One Mechanical power control
Launch control (automotive)
Physics
1,394
21,463,296
https://en.wikipedia.org/wiki/Game%20theory%20in%20communication%20networks
Game theory has been used as a tool for modeling and studying interactions between cognitive radios envisioned to operate in future communications systems. Such terminals will have the capability to adapt to the context they operate in, for example through power and rate control as well as channel selection. Software agents embedded in these terminals will potentially be selfish, meaning they will only try to maximize the throughput/connectivity of the terminal they function for, as opposed to maximizing the welfare (total capacity) of the system they operate in. Thus, the potential interactions among them can be modeled through non-cooperative games. The researchers in this field often strive to determine the stable operating points of systems composed of such selfish terminals, and try to come up with a minimum set of rules (etiquette) so as to make sure that the optimality loss compared to a cooperative, centrally controlled setting is kept to a minimum. Applications of non-cooperative game theory in wireless networks research Game theory is the study of strategic decision making. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory. Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, such that one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of class relations, and has developed into an umbrella term for the logical side of science, to include both humans and non-humans, like computers. Classic uses include the notion of equilibrium in numerous games, where each player has found or developed a tactic that cannot improve his results, given the other players' approaches. Game theory has been used extensively in wireless networks research to develop understanding of stable operating points for networks made of autonomous/selfish nodes. The nodes are considered as the players. Utility functions are often chosen to correspond to achieved connection rate or similar technical metrics. The studies done in this context can be grouped as below: Medium access games for 802.11 WLAN Various studies have analyzed radio resource management problems in 802.11 WLAN networks. In such random access studies, researchers have considered selfish nodes that control their channel access probabilities so as to maximize only their own utility (throughput). Power control games in CDMA systems Power control refers to the process through which mobiles in CDMA cellular settings adjust their transmission powers so that they do not create unnecessary interference to other mobiles, trying, nevertheless, to achieve the required quality of service. Power control can be centralized in nature, where the base station dictates and assigns transmitter power levels to mobiles based on their link qualities, or distributed, in which mobiles update their powers autonomously, independent of the base station, based on perceived service quality. In such distributed settings, the mobiles can be considered to be selfish agents (players) who try to maximize their utilities (often modeled as corresponding throughputs). Game theory is considered to be a powerful tool to study such scenarios.
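As a concrete illustration of such a distributed setting, the following self-contained Python sketch iterates the classic target-SINR power update (in the spirit of the Foschini–Miljanic algorithm, whose update can be read as each mobile's best response to the interference it measures); the channel gains, noise level and SINR target below are made-up numbers:

# gains[i][j]: channel gain from transmitter j to receiver i (illustrative values).
gains = [[1.0, 0.1, 0.2],
         [0.2, 1.0, 0.1],
         [0.1, 0.2, 1.0]]
noise = 0.01
target_sinr = 2.0
powers = [1.0, 1.0, 1.0]

for _ in range(50):
    # Each mobile needs only its own measured SINR, not global knowledge.
    sinrs = []
    for i in range(3):
        interference = sum(gains[i][j] * powers[j] for j in range(3) if j != i)
        sinrs.append(gains[i][i] * powers[i] / (interference + noise))
    powers = [p * target_sinr / s for p, s in zip(powers, sinrs)]

print([round(p, 3) for p in powers])

If the common target is feasible, the iteration converges to a unique fixed point, which is the kind of stable operating point (Nash equilibrium) such studies look for.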
Applications of cooperative game theory (coalitions) in wireless networks research Coalitional game theory in wireless networks Coalitional game theory is a branch of game theory that deals with cooperative behavior. In a coalitional game, the key idea is to study the formation of cooperative groups, i.e., coalitions, among a number of players. By cooperating, the players can strengthen their position in a given game as well as improve their utilities. In this context, coalitional game theory proves to be a powerful tool for modeling cooperative behavior in many wireless networking applications such as cognitive radio networks, physical layer security, and virtual MIMO, among others. See also Mesh networking References Network theory Network performance
Game theory in communication networks
Mathematics
786
39,541,470
https://en.wikipedia.org/wiki/Divided%20visual%20field%20paradigm
The divided visual field paradigm is an experimental technique that involves measuring task performance when visual stimuli are presented on the left or right visual hemifields. If a visual stimulus appears in the left visual field (LVF), the visual information is initially projected to the right cerebral hemisphere (RH), and conversely, if a visual stimulus appears in the right visual field (RVF), the visual information is initially received by the left cerebral hemisphere (LH). In this way, if a cerebral hemisphere has functional advantages with some aspect of a particular task, an experimenter might observe improvements in task performance when the visual information is presented on the contralateral visual field. Background The divided visual field paradigm capitalizes on the lateralization of the visual system. Each cerebral hemisphere only receives information from one half of the visual field—specifically, from the contralateral hemifield. For example, retinal projections from ganglion cells in the left eye that receive information from the left visual field cross to the right hemisphere at the optic chiasm, while information from the right visual field received by the left eye will not cross at the optic chiasm, and will remain in the left hemisphere. Stimuli presented on the right visual field (RVF) will ultimately be processed first by the left hemisphere's (LH) occipital cortex, while stimuli presented on the left visual field (LVF) will be processed first by the right hemisphere's (RH) occipital cortex. Because lateralized visual information is initially segregated between the two cerebral hemispheres, any differences in task performance (e.g., improved response time) between LVF/RVF conditions might be interpreted as differences in the RH's or LH's ability to perform the task. Methodology To enable the lateralized presentation of visual stimuli, participants must first be fixated at a centralized location, and must be unable to anticipate whether an upcoming stimulus will be presented to the right or left of fixation. Because the center of the visual field, the fovea, may project bilaterally to both the RH and LH, lateralized stimuli should appear sufficiently far from fixation. Researchers recommend that the inside edge of any visual stimulus should be between 2.5° and 3° from central fixation. Lateralized stimuli must also be presented very briefly, to eliminate the participant's ability to make an eye movement toward the lateralized stimulus (which would result in the stimulus no longer being lateralized, and instead projected to both cerebral hemispheres). Since saccadic latencies to a lateralized stimulus can be as fast as 150 ms following stimulus onset, the lateralized stimulus should only be presented for a duration of 180 ms at most. A free software tool called the "Lateralizer" has been developed for piloting and conducting customizable experiments using the divided visual field paradigm. Limitations A significant difference between RVF/LH and LVF/RH task performance using the divided visual field paradigm does provide evidence of a functional asymmetry between the two cerebral hemispheres. However, as described by Ivry and Robertson (1998), there are limitations to the types of inferences that can be made from this technique: These [divided visual field] methods have their limitations. A critical assumption has been that differences in performance with lateralized stimuli nearly always reflect functional differences between the two hemispheres.
This is an extremely strong assumption. Researchers have tended to ignore or downplay the fact that asymmetries in brain function cannot be directly observed with these methods. It would require a leap of faith to assume that there is a straightforward mapping between lateralizing a stimulus and producing disproportionate activation throughout the contralateral hemisphere. Normal subjects have an intact corpus callosum, which provides for the rapid transfer of information from one hemisphere to the other. Visual information can be transferred from one cerebral hemisphere to the other in as little as 3ms, so any task differences greater than 3ms may represent asymmetries in neural dynamics that are more complex than a single hemisphere's simple dominance for a particular task. Moreover, the divided visual field technique represents a relatively coarse and indirect method for localizing brain regions associated with cognitive function. Other neuroimaging techniques, including fMRI, PET, and EEG, will provide more spatial resolution, and more direct measures of neural activity. However, these methods are significantly more costly than the divided visual field paradigm. References Neuroscience Perception Medical tests Pathology
Divided visual field paradigm
Biology
921
7,912,154
https://en.wikipedia.org/wiki/Body%20Wars
Body Wars was a motion simulator attraction inside the Wonders of Life pavilion at the Walt Disney World Resort's Epcot. Riders would be taken on a mission by the fictional Miniaturized Exploration Technologies corporation (stylized as MET) to study the effects of white blood cells on a splinter inside the left index finger of a volunteer. The attraction used the Advanced Technology Leisure Application Simulator technology previously seen at Disneyland's Star Tours attraction. The ride is no longer in operation, along with the other attractions inside the Wonders of Life pavilion, which opened on October 19, 1989, and closed on January 1, 2007. History On January 22, 1988, Epcot announced that it would be building a new pavilion in Future World East. It would be called Wonders of Life and be themed to health care. The pavilion would be located between Universe of Energy and Horizons. Wonders of Life would include new restaurants, stores and several attractions. One of these attractions would be a motion simulator ride named Body Wars. The sponsor of Wonders of Life would be MetLife. Construction of the pavilion began in February of that year. The Wonders of Life pavilion officially opened to the general public on October 19, 1989. Upon opening, Body Wars had a wait time of 90 minutes. Just two months after opening, a similar ride named Star Tours opened at Disney's Hollywood Studios. Body Wars received a mixed reception from guests: some praised the thrilling experience, but others complained of motion sickness and nausea, as it was considered to be a rough ride. In the late 1990s, tensions between MetLife and Disney began to arise, as MetLife would often set up tables at the pavilion to sell park guests life insurance, which was not allowed. After MetLife's sponsorship expired in June 2001, Epcot continued to operate the Wonders of Life pavilion. The popularity of Body Wars declined over the years. In 2004, the park announced that the attraction would begin seasonal operation. The entire pavilion officially closed on January 1, 2007. Attraction description Queue Guests entered the queue on the left side of the Wonders of Life Dome. If the attraction was in high demand, an extended queue would be utilized, decorated with signage and pastel colored shapes lining the walls. This would lead into the main queue, contained within a separate external wing of the building. As guests entered, they were informed via in-queue announcements of details surrounding the fictional MET company. The guests were referred to as "MET Observation Team Members", and would be informed, via a preshow shown within the queue, of the mission that they would be going on, featuring another volunteer who had a bruise on his arm, though the ride itself did not go inside him. The queue would begin with the logo of the MET company, with various images depicting the company and the inside of the human body. Until 1993, signage was hung up stating that the company was founded in 2063, along with its motto "Pioneering the Universe Within". This would lead into the first of two "Dermatopic Purification" stations, before a hallway with in-queue TV sets, and the second of two "Dermatopic Purification" stations. Boarding Dr. Cynthia Lair had volunteered to be miniaturized to observe a splinter. The guests were told they would board vehicle Bravo 229 and would be shrunk. Their mission was to meet up with Dr. Lair and bring her out. Captain Braddock would be the guests' pilot.
Guests learned that their "LGS 250"-type probe vehicle weighed approximately 26 tons, but once miniaturized, weighed less than a drop of water. Ride The guests' vehicle, Bravo 229, moved from the bay to the miniaturization room, where technicians focused a "particle reducer" on the ship. The ship and its crew were shrunk and sent into the subject's body, under his skin. White blood cells were seen on their way to destroy his splinter. The guests arrived at the splinter, meeting with Dr. Cynthia Lair. She began to take a cell count when she was accidentally pulled into a capillary. Captain Braddock followed Dr. Lair into the vein, entering an unauthorized area. The captain steered Bravo past the heart and into the right ventricle. The guests entered the lungs, where Dr. Lair was being attacked by a white blood cell. Braddock used his lasers to free Dr. Lair. By now, the ship was very low on power. Dr. Lair suggested that they use the brain's energy to recharge the ship. Passing the heart's left atrium, the ship went through the artery to get to the brain. A neuron contacted the ship, allowing it to regain power and de-miniaturize outside of the subject's body. As the subject sat up, Mission Control congratulated Braddock, Lair, and the guests on pulling off the most spectacular mission in the history of MET. Attraction facts Cast: Jenifer Lewis as Ride Queue Instructional Video Announcer (uncredited) Tim Matheson as Captain Braddock Dakin Matthews as Mission Control Elisabeth Shue as Dr. Cynthia Lair John Reilly as Subject in pre-show (uncredited) Dayna Beilenson as Scientist (uncredited) Vehicle names: (all bays and vehicles were fictional except for Bravo 229) Bay #1: "Zulu 174" Bay #2: "Bravo 229" Bay #3: "Sierra 657" and "Foxtrot 817" Bay #4: "Charlie 218" Used the same ATLAS technology as Star Tours. Current status As of November 2014, the four simulators have been dismantled and removed from the ride building. The queue is still intact, but most of the lighting and electronic equipment has been removed. The show building is currently used as storage for the Epcot Food & Wine Festival, along with the Flower & Garden Festival. As of November 2016, the queue is being slowly dismantled, while a few remnants remain. The pavilion is currently being transformed into the new Play! pavilion (construction is under way). The exit area has had the same treatment, with all signage removed. Red archive tags have been applied to the MET sign at the queue entrance and to the Body Wars safety information sign near the exit. On February 21, 2019, it was announced that the new Play! Pavilion would be replacing the entire Wonders of Life pavilion, including the ride. What will happen to Body Wars during the transformation is currently unknown. See also Epcot attraction and entertainment history Wonders of Life Incidents at Walt Disney World Advanced Technology Leisure Application Simulator - the technology underlying Body Wars. Fantastic Voyage References External links Amusement rides introduced in 1989 Amusement rides that closed in 2007 Amusement rides manufactured by Rediffusion Simulation 1989 films Human body Former Walt Disney Parks and Resorts attractions Epcot Simulator rides Walt Disney Parks and Resorts films Future World (Epcot) Films scored by Leonard Rosenman Films directed by Leonard Nimoy 1989 establishments in Florida 2007 disestablishments in Florida
Body Wars
Physics
1,406
267,787
https://en.wikipedia.org/wiki/Rubbing%20alcohol
Rubbing alcohol, also known as surgical spirit in some regions, refers to a group of denatured alcohols commonly used as topical antiseptics. These solutions are primarily composed of either isopropyl alcohol (isopropanol) or ethanol, with isopropyl alcohol being the more widely available formulation. Rubbing alcohol is rendered undrinkable by the addition of bitterants or other denaturants. In the British Pharmacopoeia, the equivalent product is called surgical spirit. Beyond antiseptic uses, rubbing alcohol has various industrial and household applications. In North American English, the term "rubbing alcohol" generally encompasses both isopropyl and ethanol-based products. The United States Pharmacopeia (USP) defines "isopropyl rubbing alcohol USP" as containing approximately 70 percent by volume of pure isopropyl alcohol and defines "rubbing alcohol USP" as containing approximately 70 percent by volume of denatured alcohol. In Ireland and the UK, the comparable preparation is surgical spirit B.P., which the British Pharmacopoeia defines as 95% methylated spirit, 2.5% castor oil, 2% diethyl phthalate, and 0.5% methyl salicylate. Under its alternative name of "wintergreen oil", methyl salicylate is a common additive to North American rubbing alcohol products. Individual manufacturers are permitted to use their own formulation standards, in which the ethanol content for retail bottles of rubbing alcohol is labeled as, and ranges from, 70 to 99% v/v. All rubbing alcohols are unsafe for human consumption: isopropyl rubbing alcohols do not contain the ethyl alcohol of alcoholic beverages; ethyl rubbing alcohols are based on denatured alcohol, which is a combination of ethyl alcohol and one or more bitter poisons that make the substance toxic. History The term "rubbing alcohol" came into prominence in North America during the Prohibition era of 1920 to 1933, when alcoholic beverages were prohibited throughout the United States. The term "rubbing" emphasized that this alcohol was not intended for consumption. Nevertheless, it was well documented as a surrogate alcohol as early as 1925. Alcohol was already widely used as a liniment for massage. There was no standard formula for rubbing alcohol, which was sometimes perfumed with additives such as wintergreen oil (methyl salicylate). Properties All rubbing alcohols are volatile and flammable. Ethyl rubbing alcohol has an extremely bitter taste from additives. The specific gravity of Formula 23-H is between 0.8691 and 0.8771 at . Isopropyl rubbing alcohols contain from 50% to 99% by volume of isopropyl alcohol, the remainder consisting of water. Boiling points vary with the proportion of isopropyl alcohol from ; likewise, freezing points vary from . Surgical spirit BP boils at . Naturally colorless, products may contain color additives. They may also contain medically-inactive additives for fragrance, such as wintergreen oil (methyl salicylate), or for other purposes. US legislation To protect alcohol tax revenue in the United States, all preparations classified as Rubbing Alcohols (defined as those containing ethanol) must have poisonous additives to limit human consumption in accordance with the requirements of the US Treasury Department, Bureau of Alcohol, Tobacco, and Firearms, using Formula 23-H (8 parts by volume of acetone, 1.5 parts by volume of methyl isobutyl ketone, and 100 parts by volume of ethyl alcohol). It contains 87.5–91% by volume of absolute ethyl alcohol.
The rest consists of water and the denaturants, with or without color additives, and perfume oils. Rubbing alcohol contains in each 100 ml more than 355 mg of sucrose octaacetate or more than 1.40 mg of denatonium benzoate. The preparation may be colored with one or more color additives. A suitable stabilizer may also be added. Warnings Product labels for rubbing alcohol include a number of warnings about the chemical, including the flammability hazards and its intended use only as a topical antiseptic and not for internal wounds or consumption. It should be used in a well-ventilated area due to inhalation hazards. Poisoning can occur from ingestion, inhalation, absorption, or consumption of rubbing alcohol. References External links Why Is Drinking Rubbing Alcohol Bad? Antiseptics Cleaning products Household chemicals
Rubbing alcohol
Chemistry
930
165,320
https://en.wikipedia.org/wiki/Jodrell%20Bank%20Observatory
Jodrell Bank Observatory in Cheshire, England, hosts a number of radio telescopes as part of the Jodrell Bank Centre for Astrophysics at the University of Manchester. The observatory was established in 1945 by Bernard Lovell, a radio astronomer at the university, to investigate cosmic rays after his work on radar in the Second World War. It has since played an important role in the research of meteoroids, quasars, pulsars, masers, and gravitational lenses, and was heavily involved with the tracking of space probes at the start of the Space Age. The main telescope at the observatory is the Lovell Telescope. Its diameter of 76 m (250 ft) makes it the third largest steerable radio telescope in the world. There are three other active telescopes at the observatory: the Mark II, and the 42 ft (12.8 m) and 7 m diameter radio telescopes. Jodrell Bank Observatory is the base of the Multi-Element Radio Linked Interferometer Network (MERLIN), a National Facility run by the University of Manchester on behalf of the Science and Technology Facilities Council. The Jodrell Bank Visitor Centre and an arboretum are in Lower Withington, and the Lovell Telescope and the observatory near Goostrey and Holmes Chapel. The observatory is reached from the A535. The Crewe to Manchester Line passes by the site, and Goostrey station is a short distance away. In 2019, the observatory became a UNESCO World Heritage Site. Early years Jodrell Bank was first used for academic purposes in 1939 when the University of Manchester's Department of Botany purchased three fields from the Leighs. It is named from a nearby rise in the ground, Jodrell Bank, which was named after William Jauderell, an archer whose descendants lived at the mansion that is now Terra Nova School. The site was extended in 1952 by the purchase of a farm from George Massey, on which the Lovell Telescope was built. The site was first used for astrophysics in 1945, when Bernard Lovell used some equipment left over from World War II, including a gun-laying radar, to investigate cosmic rays. The equipment was a GL II radar system working at a wavelength of 4.2 m, provided by J. S. Hey. He intended to use the equipment in Manchester, but electrical interference from the trams on Oxford Road prevented him from doing so. He moved the equipment to Jodrell Bank, south of the city, on 10 December 1945. Lovell's main research was transient radio echoes, which he confirmed were from ionized meteor trails by October 1946. The first staff were Alf Dean and Frank Foden, who observed meteors with the naked eye while Lovell observed the electromagnetic signal using the radar equipment. The first time Lovell turned the radar on – 14 December 1945 – the Geminids meteor shower was at a maximum. Over the next few years, Lovell accumulated more ex-military radio hardware, including a portable cabin, known as a "Park Royal" in the military (see Park Royal Vehicles). The first permanent building was near to the cabin and was named after it. Searchlight telescope A searchlight was loaned to Jodrell Bank in 1946 by the army; a broadside array was constructed on its mount by J. Clegg. It consisted of seven Yagi–Uda antenna elements. It was used for astronomical observations in October 1946. On 9 and 10 October 1946, the telescope observed ionisation in the atmosphere caused by meteors in the Giacobinids meteor shower. When the antenna was turned by 90 degrees at the maximum of the shower, the number of detections dropped to the background level, proving that the transient signals detected by radar were from meteors.
The telescope was then used to determine the radiant points for meteors. This was possible as the echo rate is at a minimum at the radiant point, and a maximum at 90 degrees to it. The telescope and other receivers on the site studied the auroral streamers that were visible in early August 1947. Transit Telescope The Transit Telescope was a parabolic reflector zenith telescope built in 1947. At the time, it was the world's largest radio telescope. It consisted of a wire mesh suspended from a ring of scaffold poles, which focussed radio signals on a focal point above the ground. The telescope mainly looked directly upwards, but the direction of the beam could be changed by small amounts by tilting the mast to change the position of the focal point. The focal mast was changed from timber to steel before construction was complete. The telescope was replaced by the steerable Lovell Telescope, and the Mark II telescope was subsequently built at the same location. The telescope could map a ±15-degree strip around the zenith at 72 and 160 MHz, with a resolution at 160 MHz of 1 degree. It discovered radio noise from the Great Nebula in Andromeda – the first definite detection of an extragalactic radio source – and the remnants of Tycho's Supernova at radio frequencies; at the time, the remnant had not been discovered by optical astronomy. Lovell Telescope The "Mark I" telescope, now known as the Lovell Telescope, was the world's largest steerable dish radio telescope, 76 m (250 ft) in diameter, when it was constructed in 1957; it is now the third largest, after the Green Bank telescope in West Virginia and the Effelsberg telescope in Germany. Part of the gun turret mechanisms from two First World War battleships was reused in the telescope's motor system. The telescope became operational in mid-1957, in time for the launch of the Soviet Union's Sputnik 1, the world's first artificial satellite. The telescope was the only one able to track Sputnik's booster rocket by radar, first locating it just before midnight on 12 October 1957, eight days after its launch. In the following years, the telescope tracked various space probes. Between 11 March and 12 June 1960, it tracked the United States' NASA-launched Pioneer 5 probe. The telescope sent commands to the probe, including those to separate it from its carrier rocket and to turn on its more powerful transmitter when the probe was eight million miles away. It received data from the probe, being the only telescope in the world capable of doing so. In February 1966, Jodrell Bank was asked by the Soviet Union to track its unmanned Moon lander Luna 9, and recorded its facsimile transmission of photographs from the Moon's surface. The photographs were sent to the British press and published before the Soviets made them public. In 1969, the Soviet Union's Luna 15 was also tracked. A recording of the moment when Jodrell Bank's scientists observed the mission was released on 3 July 2009. With the support of Sir Bernard Lovell, the telescope tracked Russian satellites. Satellite and space probe observations were shared with the US Department of Defense satellite tracking research and development activity at Project Space Track.
Tracking space probes took only a fraction of the Lovell telescope's observing time; the remainder was used for scientific observations, including using radar to measure the distance to the Moon and to Venus; observations of astrophysical masers around star-forming regions and giant stars; observations of pulsars (including the discovery of millisecond pulsars and the first pulsar in a globular cluster); and observations of quasars and gravitational lenses (including the detection of the first gravitational lens and the first Einstein ring). The telescope has also been used for SETI observations. Mark II and III telescopes The Mark II telescope is an elliptical radio telescope, with a major axis of and a minor axis of . It was constructed in 1964. As well as operating as a standalone telescope, it has been used as an interferometer with the Lovell Telescope, and is now primarily used as part of the MERLIN project. The Mark III telescope, the same size as the Mark II, was constructed to be transportable, but it was never moved from Wardle, near Nantwich, where it was used as part of MERLIN. It was built in 1966 and decommissioned in 1996. Mark IV, V and VA telescope proposals The Mark IV, V and VA telescope proposals were put forward from the 1960s through to the 1980s to build even larger radio telescopes. The Mark IV proposal was for a standalone telescope, built as a national project. The Mark V proposal was for a moveable telescope. The concept of this proposal was for a telescope on a railway line adjoining Jodrell Bank, but concerns about future levels of interference meant that a site in Wales would have been preferable. Design proposals by Husband and Co and Freeman Fox, who had designed the Parkes Observatory telescope in Australia, were put forward. The Mark VA was similar to the Mark V but with a smaller dish and a design using prestressed concrete, similar to the Mark II (the previous two designs more closely resembled the Lovell telescope). None of the proposed telescopes was constructed, although design studies were carried out and scale models were made, partly because of the changing political climate, and partly due to the financial constraints of astronomical research in the UK. It also became necessary to upgrade the Lovell Telescope to the Mark IA, which overran in terms of cost. Other single dishes A 50 ft (15 m) alt-azimuth dish was constructed in 1964 for astronomical research and to track the Zond 1, Zond 2, Ranger 6 and Ranger 7 space probes and Apollo 11. After an accident that irreparably damaged the 50 ft telescope's surface, it was demolished in 1982 and replaced with a more accurate telescope, the "42 ft". The 42 ft (12.8 m) dish is mainly used to observe pulsars, and continually monitors the Crab Pulsar. When the 42 ft was installed, a smaller dish, the "7 m" (actually 6.4 m, or 21 ft, in diameter), was installed and is used for undergraduate teaching. The 42 ft and 7 m telescopes were originally used at the Woomera Rocket Testing Range in South Australia. The 7 m was originally constructed in 1970 by the Marconi Company. A Polar Axis telescope was built in 1962. It had a circular 50 ft (15.2 m) dish on a polar mount, and was mostly used for moon radar experiments. It has been decommissioned. A reflecting optical telescope was donated to the observatory in 1951 but was not used much, and was donated to the Salford Astronomical Society around 1971.
MERLIN The Multi-Element Radio Linked Interferometer Network (MERLIN) is an array of radio telescopes spread across England and the Welsh borders. The array is run from Jodrell Bank on behalf of the Science and Technology Facilities Council as a National Facility. The array consists of up to seven radio telescopes and includes the Lovell Telescope, the Mark II, Cambridge, Defford, Knockin, Darnhall, and Pickmere (previously known as Tabley). The longest baseline is 217 km, and MERLIN can operate at frequencies between 151 MHz and 24 GHz. At a wavelength of 6 cm (5 GHz frequency), MERLIN has a resolution of 50 milliarcseconds, which is comparable to that of the HST at optical wavelengths. Very Long Baseline Interferometry Jodrell Bank has been involved with Very Long Baseline Interferometry (VLBI) since the late 1960s; the Lovell telescope took part in the first transatlantic interferometer experiment in 1968, with other telescopes at Algonquin and Penticton in Canada. The Lovell Telescope and the Mark II telescope are regularly used for VLBI with telescopes across Europe (the European VLBI Network), giving a resolution of around 0.001 arcseconds. Square Kilometre Array In April 2011, Jodrell Bank was named as the location of the control centre for the planned Square Kilometre Array, or SKA Project Office (SPO). The SKA is planned by a collaboration of 20 countries and, when completed, is intended to be the most powerful radio telescope ever built. In April 2015 it was announced that Jodrell Bank would be the permanent home of the SKA headquarters for the period of operation expected for the telescope (over 50 years). Research The Jodrell Bank Centre for Astrophysics, of which the observatory is a part, is one of the largest astrophysics research groups in the UK. About half of the research of the group is in the area of radio astronomy – including research into pulsars, the cosmic microwave background radiation, gravitational lenses, active galaxies and astrophysical masers. The group also carries out research at different wavelengths, looking into star formation and evolution, planetary nebulae and astrochemistry. The first director of Jodrell Bank was Bernard Lovell, who established the observatory in 1945. He was succeeded in 1980 by Sir Francis Graham-Smith, followed by Professor Rod Davies around 1990 and Professor Andrew Lyne in 1999. Professor Phil Diamond took over the role on 1 October 2006, at the time when the Jodrell Bank Centre for Astrophysics was formed. Prof Ralph Spencer was Acting Director during 2009 and 2010. In October 2010, Prof. Albert Zijlstra became Director of the Jodrell Bank Centre for Astrophysics. Professor Lucio Piccirillo was the Director of the Observatory from October 2010 to October 2011. Prof. Simon Garrington is the JBCA Associate Director for the Jodrell Bank Observatory. In 2016, Prof. Michael Garrett was appointed as the inaugural Sir Bernard Lovell chair of Astrophysics and Director of the Jodrell Bank Centre for Astrophysics. As Director of the JBCA, Prof. Garrett also has overall responsibility for Jodrell Bank Observatory. In May 2017, Jodrell Bank entered into a partnership with the Breakthrough Listen initiative, which will share information with Jodrell Bank's team, who wish to conduct an independent SETI search using the 76 m radio telescope and the e-MERLIN array. There is an active development programme researching and constructing telescope receivers and instrumentation.
The observatory has been involved in the construction of several Cosmic Microwave Background experiments, including the Tenerife Experiment, which ran from the 1980s to 2000, and the amplifiers and cryostats for the Very Small Array. It has also constructed the front-end modules of the 30 and 44 GHz receivers for the Planck spacecraft. Receivers were also designed at Jodrell Bank for the Parkes Telescope in Australia. Visitor facilities, and events A visitors' centre, opened on 19 April 1971 by the Duke of Devonshire, attracted around 120,000 visitors per year. It covered the history of Jodrell Bank and had a planetarium and 3D theatre hosting simulated trips to Mars. Asbestos in the visitors' centre buildings led to its demolition in 2003, leaving a remnant of its far end. A marquee was set up in its grounds while a new science centre was planned. The plans were shelved when Victoria University of Manchester and UMIST merged to become the University of Manchester in 2004, leaving the interim centre, which received around 70,000 visitors a year. In October 2010, work on a new visitor centre started, and the Jodrell Bank Discovery Centre opened on 11 April 2011. It includes an entrance building, the Planet Pavilion, a Space Pavilion for exhibitions and events, a glass-walled cafe with a view of the Lovell Telescope and an outdoor dining area, an education space, and landscaped gardens including the Galaxy Maze. A large orrery was installed in 2013. It does not, however, include a planetarium, though a small inflatable planetarium dome has been in use on the site in recent years. The visitor centre is open Tuesday to Sunday, and on Mondays during school and bank holidays, and organises public outreach events, including public lectures, star parties, and "ask an astronomer" sessions. A path around the Lovell telescope runs approximately 20 m from the telescope's outer railway; information boards explain how the telescope works and the research that is done with it. The arboretum, created in 1972, houses the UK's national collections of crab apple Malus and mountain ash Sorbus species, and the Heather Society's Calluna collection. The arboretum also has a small-scale model of the Solar System; the scale is approximately 1:5,000,000,000. At Jodrell Bank, as part of the SpacedOut project, is the Sun in a 1:15,000,000 scale model of the Solar System covering Britain. On 7 July 2010, it was announced that the observatory was being considered for the 2011 United Kingdom Tentative List for World Heritage Site status. It was announced on 22 March 2011 that it was on the UK government's shortlist. In January 2018, it became the UK's candidate for World Heritage status. In July 2011 the visitor centre and observatory hosted "Live from Jodrell Bank – Transmission 001", a rock concert with bands including The Flaming Lips, British Sea Power, Wave Machines, OK Go and Alice Gold. On 23 July 2012, Elbow performed live at the observatory and filmed a documentary of the event and the facility, which was released as a live CD/DVD of the concert. On 6 July 2013, Transmission 4 featured Australian Pink Floyd, Hawkwind, The Time & Space Machine and The Lucid Dream. On 7 July 2013, Transmission 5 featured New Order, Johnny Marr, The Whip, Public Service Broadcasting, Jake Evans and Hot Vestry. On 30 August 2013, Transmission 6 featured Sigur Rós, Poliça and Daughter. On 31 August 2013, Jodrell Bank hosted a concert performed by the Hallé Orchestra to commemorate what would have been Lovell's 100th birthday.
As well as a number of operatic performances during the day, the evening Hallé performance saw numbers such as the themes from Star Trek, Star Wars and Doctor Who, among others. The main Lovell telescope was rotated to face the onlooking crowd and used as a huge projection screen showing various animated planetary effects. During the interval the 'screen' was used to show a history of Lovell's work and Jodrell Bank. There is an astronomy podcast from the observatory, named The Jodcast. The BBC television programme Stargazing Live was hosted in the control room of the observatory from 2011 to 2016. Since 2016, the observatory has hosted Bluedot, a music and science festival, featuring musical acts such as Public Service Broadcasting and The Chemical Brothers, as well as talks by scientists and science communicators such as Jim Al-Khalili and Richard Dawkins. Threat of closure On 3 March 2008, it was reported that Britain's Science and Technology Facilities Council (STFC), faced with an £80 million shortfall in its budget, was considering withdrawing its planned £2.7 million annual funding of Jodrell Bank's e-MERLIN project. The project, which aimed to replace the microwave links between Jodrell Bank and a number of other radio telescopes with high-bandwidth fibre-optic cables, greatly increasing the sensitivity of observations, was seen as critical to the survival of the facility. Bernard Lovell said "It will be a disaster … The fate of the Jodrell Bank telescope is bound up with the fate of e-MERLIN. I don't think the establishment can survive if the e-MERLIN funding is cut". On 9 July 2008, it was reported that, following an independent review, STFC had reversed its initial position and would now guarantee funding of £2.5 million annually for three years. Fictional references Jodrell Bank has been mentioned in several works of fiction, including Doctor Who (The Tenth Planet, Remembrance of the Daleks, "The Poison Sky", "The Eleventh Hour", "Spyfall") and Birthday Boy by David Baddiel. It was intended to be a filming location for Logopolis (Tom Baker's final Doctor Who serial), but budget restrictions prevented this, and another location with a superimposed model of a radio telescope was used instead. It was also mentioned in The Hitchhiker's Guide to the Galaxy (as well as The Hitchhiker's Guide to the Galaxy film), The Creeping Terror and Meteor. Jodrell Bank was also featured heavily in the 1983 music video "Secret Messages" by Electric Light Orchestra and also in "Are We Ourselves?" by The Fixx. The Prefab Sprout song "Technique" (from the debut album Swoon) opens with the line "Her husband works at Jodrell Bank/He's home late in the morning". The observatory is the site of several episodes in the novel Boneland (2012) by the local novelist Alan Garner, and the central character, Colin Whisterfield, is an astrophysicist on its staff. Jodrell Bank also made an appearance in the CBBC series Bitsa. Appraisal Since 13 July 1988 the Lovell Telescope has been designated as a Grade I listed building. On 10 July 2017 the Mark II Telescope was also designated at the same grade. On the same date five other buildings on the site were designated at Grade II; namely the Searchlight Telescope, the Control Building, the Park Royal Building, the Electrical Workshop, and the Link Hut. Grade I is the highest of the three grades of listing, and is applied to buildings that are of "exceptional interest", and Grade II, the lowest grade, is applied to buildings "of special interest".
At the 43rd Session of the UNESCO World Heritage Committee in Baku on 7 July 2019, the Jodrell Bank Observatory was inscribed as a World Heritage Site on the basis of four criteria. Criterion (i): Jodrell Bank Observatory is a masterpiece of human creative genius related to its scientific and technical achievements. Criterion (ii): Jodrell Bank Observatory represents an important interchange of human values, over a span of time and on a global scale, on developments. Criterion (iv): Jodrell Bank Observatory represents an outstanding example of a technological ensemble which illustrates a significant stage in human history. Criterion (vi): Jodrell Bank Observatory is directly and tangibly associated with events and ideas of outstanding universal significance. See also Cerro Tololo Inter-American Observatory Extremely Large Telescope Fabra Observatory Griffith Observatory La Silla Observatory Llano de Chajnantor Observatory Paranal Observatory Very Large Telescope List of World Heritage Sites in the United Kingdom References Books Gunn, A. G. (2005). "Jodrell Bank and the Meteor Velocity Controversy". In The New Astronomy: Opening the Electromagnetic Window and Expanding Our View of Planet Earth, Volume 334 of the Astrophysics and Space Science Library. Part 3, pages 107–118. Springer Netherlands. Journal articles External links Jodrell Bank Centre for Astrophysics Jodrell Bank Visitor Centre Jodrell Bank Observatory Archives at University of Manchester Library. Radio observatories Astronomical observatories in England Astronomy institutes and departments Tourist attractions in Cheshire 1945 establishments in the United Kingdom Arboreta in England Botanical gardens in England Gardens in Cheshire Space programme of the United Kingdom Square Kilometre Array World Heritage Sites in England Buildings at the University of Manchester
Jodrell Bank Observatory
Astronomy
4,676
66,497,133
https://en.wikipedia.org/wiki/Bruceanol%20B
Bruceanol B is a cytotoxic quassinoid isolated from Brucea antidysenterica with potential antitumor and antileukemic properties. See also Bruceanol References Quassinoids
Bruceanol B
Chemistry
48
10,045,226
https://en.wikipedia.org/wiki/Large%20Scale%20Concept%20Ontology%20for%20Multimedia
The Large-Scale Concept Ontology for Multimedia project was a series of workshops held from April 2004 to September 2006 for the purpose of defining a standard formal vocabulary for the annotation and retrieval of video. Mandate The Large-Scale Concept Ontology for Multimedia project was sponsored by the Disruptive Technology Office and brought together representatives from a variety of research communities, such as multimedia learning, information retrieval, computational linguistics, library science, and knowledge representation, as well as "user" communities such as intelligence agencies and broadcasters, to work collaboratively towards defining a set of 1,000 concepts. Individually, each concept was to meet the following criteria: Utility: the concepts must support realistic video retrieval problems. Feasibility: the concepts are capable, or will be capable, of detection given the near-term (5-year projected) state of technology. Observability: the concepts occur with relatively high frequency in actual video data sets. Jointly, these concepts were to meet the additional criterion of providing broad (domain-independent) coverage. High-level target areas for coverage included physical objects, including animate objects (such as people, mobs, and animals), and inanimate objects, ranging from large-scale (such as buildings and highways) to small-scale (such as telephones and appliances); actions and events; locations and settings; and graphics. The effort was led by Dr. Milind Naphade, who was the principal investigator, along with researchers from Carnegie Mellon University, Columbia University, and IBM. Development tracks The project had two main "tracks": the development and deployment of keyframe annotation tools (performed by CMU and Columbia), and the development of the Large-Scale Concept Ontology for Multimedia concept hierarchy itself. The second track was executed in two phases: The first phase consisted of the manual construction of an 884-concept hierarchy, performed collaboratively among the research and user community representatives. The second phase, performed by knowledge representation experts at Cycorp, Inc., involved the mapping of the concepts into the Cyc knowledge base and the use of the Cyc inference engine to semi-automatically refine, correct, and expand the concept hierarchy. The mapping/expansion phase of the project was motivated by a desire to increase breadth—the mapping had the effect of moving from 884 concepts to well past the initial goal of 1,000—and to move Large-Scale Concept Ontology for Multimedia from a one-dimensional hierarchy of concepts to a full-blown ontology of rich semantic connections. Project results The outputs of the effort included: A "lite" version of the Large-Scale Concept Ontology for Multimedia concept hierarchy consisting of a subset of 449 concepts. A corpus of 61,901 video keyframes, taken from the 2006 TRECVID data set, annotated using Large-Scale Concept Ontology for Multimedia "lite". The full taxonomy of 2,638 concepts, built semi-automatically by mapping 884 concepts, manually identified by collaborators, into the Cyc knowledge base, and querying the Cyc inference engine for useful additions. The full ontology, in the form of a 2006 ResearchCyc release that contained the Large-Scale Concept Ontology for Multimedia mappings into the Cyc ontology. Public detectors Several sets of concept detectors were developed and released for public use: VIREO-374, 374 detectors developed by City University of Hong Kong. 
Columbia374, 374 detectors developed by Columbia University. Mediamill101, 101 detectors developed by the University of Amsterdam. Use in the larger research community Since its release, Large-Scale Concept Ontology for Multimedia has begun to be used successfully in visual recognition research: apart from research done by project participants, it has been used in independent research on concept extraction from images, and has served as the basis for a video annotation tool. See also Multimedia Web Ontology Language (MOWL) References External links Large-Scale Concept Ontology for Multimedia homepage Multimedia
Large Scale Concept Ontology for Multimedia
Technology
808
56,789,136
https://en.wikipedia.org/wiki/Tonkin%20weasel
The Tonkin weasel or Vietnamese mountain weasel (Mustela tonkinensis) is a species of weasel described by Björkegren in 1941. It is known only from a single specimen collected from an undisclosed location in Northern Vietnam. Originally believed to be a form of either the least weasel or the yellow-bellied weasel, the species was distinguished as a separate variety on the basis of skull differences by Groves in 2007. Description A standard-sized weasel, the Tonkin weasel measures between 20 and 25 centimetres in body length, with a tail length of between 10 and 11 centimetres. The upper section of the body is medium brown, while the throat, chest and stomach are white in colour. The colouring of the fur is regarded as 'vulgaris-type', which is characterised by an indented demarcation line between the areas of brown and white colour in both the neck and trunk regions. The weasel is distinguished from other species by the size of the narrow skull. Distribution and habitat The recorded specimen is believed to have originated from a mountain range within the Hoàng Liên National Park in the Lào Cai Province. Although Björkegren initially recorded the location of the specimen as in close proximity to Sa Pa, it has been concluded by Abramov that the point of origin was more likely to have been Seo My Ty, to the southwest of the town. It is therefore probable that the species, if extant, survives within the temperate Fokienia forest of the sub-alpine highlands of Northern Vietnam. Behaviour Like other small carnivores of the region, the Tonkin weasel probably has a carnivorous diet, likely consisting of birds, insects and rodents, including Père David's vole and the Eurasian harvest mouse. Although it is probably an adequate climber, arboreal and scansorial animals are unlikely to form a portion of its diet. Population The population of the species remains unknown, as it has only been recorded on a single occasion, when caught in 1939. It remains possible that concentrations of the population still exist within the higher altitude areas of the Indochina Peninsula, as there is no evidence that, despite extensive land clearing in the region, the weasel is in any way dependent on the temperate forest ecosystem. Despite surveys between 2005 and 2012 which involved numerous discussions with the local Hmong people and forest rangers, no supplementary sightings of the weasel have been reported to date. Threats The weasel is believed to be located within a region where hunting in all forms is relatively intense, and it is therefore tenable to suggest that the species may be in decline. The placement of traps to catch rodents and birds in the highlands may also place the weasel in inadvertent danger. In addition, much of the suggested area of habitation has been prone to fragmentation for agricultural purposes. References Mammals of Asia Weasels Mammals described in 1941 Species known from a single specimen
Tonkin weasel
Biology
589
10,930,626
https://en.wikipedia.org/wiki/Description%20error
A description error or selection error is an error, or more specifically a human error, that occurs when a person performs the correct action on the wrong object due to insufficient specification of an action which would have led to a desired result. This commonly happens when similar actions lead to different results. A typical example is a panel with rows of identical switches, where it is easy to carry out a correct action (flip a switch) on a wrong switch due to their insufficient differentiation. If noticed right away, this error can be very disorienting and usually causes a brief loss of situation awareness or automation surprise; if it goes unnoticed, it can cause more serious problems. Interaction design should therefore make allowances, such as clearly highlighting the selected item. Donald Norman describes the subject in his book The Design of Everyday Things. There he describes how user-centered design can help account for human limitations that can lead to errors like description errors. James Reason also covers the subject in his book Human Error. References External links Reducing control selection errors associated with underground bolting equipment Human behavior
Description error
Engineering,Biology
219
21,710,054
https://en.wikipedia.org/wiki/Polychloro%20phenoxy%20phenol
Polychloro phenoxy phenols (polychlorinated phenoxy phenols, PCPPs) are a group of organic polyhalogenated compounds. Among them are triclosan and predioxin, which can degrade to produce certain types of dioxins and furans. Notably, however, the particular dioxin formed by the degradation of triclosan, 2,8-DCDD, was found to be non-toxic in fish embryos. References Chloroarenes Incineration Phenols Ethers
Polychloro phenoxy phenol
Chemistry,Engineering
117
28,734,047
https://en.wikipedia.org/wiki/Animalia%20Paradoxa
Animalia Paradoxa (Latin for "contradictory animals"; cf. paradox) are the mythical, magical or otherwise suspect animals mentioned in the first five editions of Carl Linnaeus's seminal work under the header "Paradoxa". It lists fantastic creatures found in medieval bestiaries and some animals reported by explorers from abroad and explains why they are excluded from Systema Naturae. According to Swedish historian Gunnar Broberg, its purpose was to offer a natural explanation and demystify the world of superstition. Paradoxa was dropped from Linnaeus' classification system as of the 6th edition (1748). Paradoxa These 10 taxa appear in the 1st to 5th editions: Hydra: Linnaeus wrote: "Hydra: body of a snake, with two feet, seven necks and the same number of heads, lacking wings, preserved in Hamburg, similar to the description of the Hydra of the Apocalypse of St. John chapters 12 and 13. And it is provided by very many as a true species of animal, but falsely. Nature for itself and always the similar, never naturally makes multiple heads on one body. Fraud and artifice, as we ourselves saw [on it] teeth of a weasel, different from teeth of an Amphibian [or reptile], easily detected." See Carl Linnaeus#Doctorate. (Distinguish from the small real coelenterate Hydra (genus).) Rana-Piscis: a South American frog which is significantly smaller than its tadpole stage; it was thus (incorrectly) reported to Linnaeus that the metamorphosis in this species went from 'frog to fish'. In the Paradoxa in the 1st edition of Systema Naturae, Linnaeus wrote "Frog-Fish or Frog Changing into Fish: is much against teaching. Frogs, like all Amphibia, delight in lungs and spiny bones. Spiny fish, instead of lungs, are equipped with gills. Therefore the laws of Nature will be against this change. If indeed a fish is equipped with gills, it will be separate from the Frog and Amphibia. If truly [it has] lungs, it will be a Lizard: for under all the sky it differs from Chondropterygii and Plagiuri." In the 10th edition of Systema Naturae, Linnaeus named the species Rana paradoxa, though its genus name was changed in 1830 to Pseudis. Monoceros (unicorn): Linnaeus wrote: "Monoceros of the older [generations], body of a horse, feet of a "wild animal", horn straight, long, spirally twisted. It is a figment of painters. The Monodon of Artedi [= narwhal] has the same manner of horn, but the other parts of its body are very different." Pelecanus: Linnaeus wrote "Pelican: The same [sources as for the previous] hand down fabulously [the story] that it inflicts a wound with its beak on its own thigh, to feed its young with the flowing blood. A sack hanging below its throat gave a handle for the story." This source writes: "Linnaeus thought [pelicans] might reflect the over-fervent imaginations of New World explorers." This claim is incorrect; pelicans are widespread in Europe and Linnaeus was merely doubting the legendary behavior. Satyrus: Linnaeus wrote "with a tail, hairy, bearded, with a manlike body, gesticulating much, very fallacious, is a species of monkey, if ever one has been seen." Borometz (aka Scythian Lamb): Linnaeus wrote: "Borometz or Scythian Lamb: is reckoned with plants, and is similar to a lamb; whose stalk coming out of the ground enters an umbilicus; and the same is said to be provided with blood from by chance devouring wild animals. But it is put together artificially from roots of American ferns. But naturally it is an allegorical description of an embryo of a sheep, as has all attributed data." This source says: "Linnaeus [...] 
had seen a faked vegetable lamb taken from China to Sweden by a traveler." Phoenix: Linnaeus wrote: "Species of bird, of which only one individual exists in the world, and which when decrepit [arises?] from [its] pyre made of aromatic [plants?] is said fabulously to become again young, to undergo happy former periods of life. In reality it is the date palm, see Kæmpf". Linnaeus wrote: The Bernicla or Scottish goose & Goose-bearing Seashell: is believed by former generations to be born from rotten wood thrown away in the sea. But the Lepas places seaweed on its featherlike internal parts, and somewhat adhering, as if indeed that goose Bernicla was arising from it. Frederick Edward Hulme noted: "[The] barnacle-goose tree was a great article of faith with our ancestors in the Middle Ages." Draco: Linnaeus wrote that it has a "snakelike body, two feet, two wings, like a bat, which is a winged lizard or a ray artificially shaped as a monster and dried." See also Jenny Haniver. Automa Mortis Linnaeus wrote "Death-watch: It produces the sound of a very small clock in walls, is named Pediculus pulsatorius, which perforates wood and lives in it". The above 10 taxa and the 4 taxa following were in the 2nd (1740) edition and the 4th and 5th editions (total 14 entries): Manticora: Linnaeus wrote merely: "face of a decrepit old man, body of a lion, tail starred with sharp points". Antilope : Linnaeus wrote merely: "Face of a "wild animal", feet [like those] of cattle, horns like a goat's [but] saw-edged". Lamia: Linnaeus wrote merely: "Face of a man, breasts of a virgin, body of a four-footed animal [but] scaled, forefeet of a "wild animal", hind[feet] [like those] of cattle". Siren: Linnaeus wrote: "Art. gen. 81 Syrene Bartol: As long as it is not seen either living or dead, nor faithfully and perfectly described, it is called in doubt". Linnaeus's reference is to Peter Artedi's writing about the Siren: "Two fins only on all the body, those on the chest. No finned tail. Head and neck and chest to the umbilicus have the human appearance. ... Our or Bartholin's Siren was found and captured in the sea near Massilia in America. From the umbilicus to the extremity of the body was unformed flesh with no sign of a tail. Two pectoral fins on the chest, with five bones or fingers, staying together, by which it swims. Its radius in the forearm is scarcely four fingers' width long. Oh that there could arise a true ichthyologist, who could examine this animal, as to whether it is a fable, or a true fish? About something which has not been seen it is preferable not to judge, than boldly to pronounce something.". Among references and quotations from other authors Artedi quoted that "some say that it is a manatee and others say completely different." References External links Biological classification Cryptozoology European legendary creatures Medieval European legendary creatures Systema Naturae
Animalia Paradoxa
Biology
1,553
44,818,034
https://en.wikipedia.org/wiki/Truss%20%28unit%29
A truss is a tight bundle of hay or straw. It would usually be cuboid, for storage or shipping, and would either be harvested into such bundles or cut from a large rick. Markets and law Hay and straw were important commodities in the pre-industrial era. Hay was required as fodder for animals, especially horses, and straw was used for a variety of purposes including bedding. In London, there were established markets for hay at Smithfield, Whitechapel and by the village of Charing, which is still called the Haymarket. The weight of trusses was regulated by law and statutes were passed in the reigns of William III and Mary II, George II and George III. The latter act, the Hay and Straw Act 1796 (36 Geo. 3. c. 88), established the standard weights of a truss as follows: new hay, 60 pounds; old hay, 56 pounds; straw, 36 pounds; and 36 trusses made up a load. Trussing A detailed description of trussing was provided in British Husbandry, sponsored by the Society for the Diffusion of Useful Knowledge. Carriage The London hay-cart may have been purpose-made to carry a load of 36 trusses, according to John French Burke, writing in 1834. Consumption British army regulations in 1799 specified standard rations of trusses. These were one truss of straw for each two soldiers, to stuff their palliasses. Half a truss was provided after sixteen days to refresh this, and the whole was then changed after 32 days. Five trusses of straw were provided for each company every sixteen days for the batmen and washerwomen, who did not have palliasses. Thirty trusses of straw were provided per company when they took the field to thatch the huts of the washerwomen. Notes References Customary units of measurement Units of mass
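Since the act fixes both the truss weights and the 36-truss load, the load weights follow directly from the figures above (the metric equivalents are approximate conversions added here for illustration):

load of new hay = 36 × 60 lb = 2,160 lb ≈ 980 kg
load of old hay = 36 × 56 lb = 2,016 lb ≈ 914 kg
load of straw = 36 × 36 lb = 1,296 lb ≈ 588 kg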
Truss (unit)
Physics,Mathematics
378
25,244,065
https://en.wikipedia.org/wiki/La%20Pedrera%20de%20R%C3%BAbies%20Formation
The La Pedrera de Rúbies Formation, also called La Pedrera de Meià, is an Early Cretaceous (late Berriasian to early Barremian) geologic formation in Catalonia, Spain. The formation crops out in the area of the Montsec in the Organyà Basin. At the La Pedrera de Meià locality, the formation consists of rhythmically laminated, lithographic limestones that formed in the distal areas of a large, shallow coastal lake. It is noted for the exceptional preservation of articulated small vertebrates and insects, similar to that of the Solnhofen Limestone. Fossil content The La Pedrera de Rúbies Formation has yielded the enantiornithine bird Noguerornis, the scincogekkomorph lizard Pedrerasaurus, two species of the teiid lizard Meyasaurus (M. fauri and M. crusafonti), the indeterminate avialan Ilerdopteryx, and the frogs Neusibatrachus wilferti, Eodiscoglossus santonjae and Montsechobatrachus. It has also yielded the crocodyliform Montsecosuchus and many insects and other arthropods, such as: Angarosphex lithographicu Archisphex catalunicus Artitocoblatta hispanica Chalicoridulum montsecensis Chrysobothris ballae Cionocoleus longicapitis Condalia woottoni Cretephialtites pedrerae Hirmoneura (Eohirmoneura) neli Hirmoneura richterae Iberoraphidia dividua Ilerdocossus pulcherrima Ilerdosphex wenzae Jarzembowskia edmundi Leridatoma pulcherrima Manlaya lacabrua Meiagaster cretaceus Meiatermes bertrani Mesoblattina colominasi Mesopalingea lerida Mimamontsecia cretacea Montsecbelus solutus Nanoraphidia lithographica Nogueroblatta fontllongae N. nana Pachypsyche vidali Pompilopterus montsecensis Proraphidia gomezi Prosyntexis montsecensis Pseudochrysobothris ballae Ptiolinites almuthae Vitisma occidentalis Cretaholocompsa montsecana Montsecosphex jarzembowskii Cretobestiola hispanica Angarosphex penyalveri Cretoserphus gomezi Bolbonectus lithographicus ?Anaglyphites pluricavus Palaeaeschna vidali Hispanochlorogomphus rossi Palaeouloborus lacasae Ichthyemidion vivaldi Correlation See also List of dinosaur-bearing rock formations List of stratigraphic units with few dinosaur genera Tremp Formation Baltic, Burmese, Dominican, Mexican amber References Bibliography Further reading A. P. Rasnitsyn and J. Ansorge. 2000. Two new Lower Cretaceous hymenopterous insects (Insecta: Hymenoptera) from Sierra del Montsec, Spain. Acta Geológica Hispánica 35:59-64 X. Martínez-Delclòs. 1993. Blátidos (Insecta, Blattodea) del Cretácico Inferior de España. Familias Mesoblattinidae, Blattulidae y Poliphagidae. Boletín Geológico y Minero 104:516-538 X. Martínez-Delclòs. 1990. Insectos del Cretácico inferior de Santa Maria de Meià (Lleida): colleción Lluís Marià Vidal i Carreras. Treballs del Museu de Geologia de Barcelona 1:91-116 P. E. S. Whalley and E. A. Jarzembowski. 1985. Fossil insects from the Lithographic Limestone Montsech (Late Jurassic-early Cretaceous), Lérida Province, Spain. Bulletin of the British Museum of Natural History (Geology) 38(5):381-412 J. E. Gomez Pallerola. 1979. Un ave y otras especies fósiles nuevas de la biofacies de Santa María de Meyá (Lérida). Boletín Geológico y Minero 90:333-346 Geologic formations of Spain Cretaceous Spain Lower Cretaceous Series of Europe Barremian Stage Hauterivian Stage Valanginian Stage Berriasian Stage Limestone formations Lacustrine deposits Paleontology in Spain
La Pedrera de Rúbies Formation
Physics
966
4,489,440
https://en.wikipedia.org/wiki/SSSPM%20J1549-3544
SSSPM J1549-3544 is a star in the constellation Lupus with high proper motion. It was initially found to have high proper motion in a 2003 survey of images taken by the optical SuperCOSMOS Sky Surveys and by the near-infrared sky surveys 2MASS and DENIS. It was then thought to be a cool white dwarf close to the Sun. However, more detailed spectroscopic observations in 2005 appear to show that it is not a white dwarf, but a high-velocity halo metal-poor subdwarf. References Lupus (constellation) K-type subdwarfs
SSSPM J1549-3544
Astronomy
125
37,068,575
https://en.wikipedia.org/wiki/International%20Council%20of%20Chemical%20Associations
The International Council of Chemical Associations (ICCA) is the trade association of the global chemical industry. Its members are both regional trade associations, like Cefic or the Gulf Petrochemicals and Chemicals Association, and national associations, including the American Chemistry Council. According to its own figures, ICCA represents chemical companies which account for more than 75% of global production capacity, making more than US$1.6 trillion in revenues each year. ICCA is the steward of Responsible Care, a voluntary scheme to improve chemical safety among its members. Responsible Care had been launched by the Chemistry Industry Association of Canada in 1985. At the 2006 International Conference on Chemicals Management, Responsible Care was extended through a Global Product Strategy and a Global Charter. Membership Full members of ICCA include regional and national chemical industry associations such as those named above. References External links International business organizations Chemistry trade associations
International Council of Chemical Associations
Chemistry
167
844,841
https://en.wikipedia.org/wiki/Intermediate-density%20lipoprotein
Intermediate-density lipoproteins (IDLs) belong to the lipoprotein particle family and are formed from the degradation of very low-density lipoproteins as well as high-density lipoproteins. IDL is one of the five major groups of lipoproteins (chylomicrons, VLDL, IDL, LDL, HDL) that enable fats and cholesterol to move within the water-based solution of the bloodstream. Each native IDL particle consists of protein that encircles various lipids, enabling, as a water-soluble particle, these lipids to travel in the aqueous blood environment as part of the fat transport system within the body. Their size is, in general, 25 to 35 nm in diameter, and they contain primarily a range of triglycerides and cholesterol esters. They are cleared from the plasma into the liver by receptor-mediated endocytosis, or further degraded by hepatic lipase to form LDL particles. Although one might intuitively assume that "intermediate-density" refers to a density between that of high-density and low-density lipoproteins, it in fact refers to a density between that of low-density and very-low-density lipoproteins. In general, IDL, somewhat similar to low-density lipoprotein (LDL), transports a variety of triglyceride fats and cholesterol and, like LDL, can also promote the growth of atheroma. VLDL is a large, triglyceride-rich lipoprotein secreted by the liver that transports triglyceride to adipose tissue and muscle. The triglycerides in VLDL are removed in capillaries by the enzyme lipoprotein lipase, and the VLDL returns to the circulation as a smaller particle with a new name, intermediate-density lipoprotein (IDL). The IDL particles have lost most of their triglyceride, but they retain cholesteryl esters. Some of the IDL particles are rapidly taken up by the liver; others remain in circulation, where they undergo further triglyceride hydrolysis by hepatic lipase and are converted to LDL. A distinguishing feature of the IDL particle is their content of multiple copies of the receptor ligand ApoE in addition to a single copy of ApoB-100. The multiple copies of ApoE allow IDL to bind to the LDL receptor with a very high affinity. When IDL is converted to LDL, the ApoE leaves the particle and only the ApoB-100 remains. Thereafter, the affinity for the LDL receptor is much reduced. References Lipoproteins
Intermediate-density lipoprotein
Chemistry
581
844,938
https://en.wikipedia.org/wiki/Amiga%20Old%20File%20System
On the Amiga, the Old File System, sometimes also called the Amiga File System, was the filesystem for AmigaOS before the Amiga Fast File System. Even though it used 512-byte blocks, it reserved the first small portion of each block for metadata, leaving an actual data block capacity of 488 bytes per block. It was not well suited to anything except floppy disks, and it was soon replaced. History Commonly known as just the Amiga File System, it originally came from the filesystem of TRIPOS, which formed the basis of the first versions of AmigaDOS. It received the nickname of "Old" or "Original" File System when the Fast File System was released with AmigaOS 1.3. OFS is very good for repairing the filesystem in the event of a problem, although the so-called DiskDoctor provided by Commodore quickly earned the name DiskDestroyer, because it could not repair No-DOS type autostart disks provided by third-party software manufacturers as bootable disks for games. The idea of creating non-standard autobootable disks was born as a primitive attempt to prevent copying of such disks and to avoid loading and launching AmigaDOS, in order to directly access the Amiga graphics, audio and memory chipsets. DiskDoctor in fact changed the bootblocks of autostart disks into standard AmigaDOS-based ones, renaming the disk "Lazarus", which made the autostart disk unusable. Characteristics of AmigaDOS Floppy Disks Amiga uses MFM encoding/decoding by default when handling floppy disks. There are 80 cylinders on an Amiga floppy disk. Each cylinder has 2 MFM tracks, one on each side of the disk. Double density (DD) disks have 11 sectors per MFM track, high density (HD) disks have 22 sectors. The geometry of an Amiga floppy disk is as follows: DD disks: 512 bytes/sector, 11 sector/track, 2 track/cyl, 80 cyl/disk HD disks: 512 bytes/sector, 22 sector/track, 2 track/cyl, 80 cyl/disk The DD disk has 11 * 2 * 80 = 1760 (0 to 1759) blocks, while the HD disk has 22 * 2 * 80 = 3520 blocks. Amiga stores 880 KiB on a DD disk and 1760 KiB on an HD floppy disk. Characteristics of Files under AmigaDOS Prior to AmigaOS 3.5, AmigaDOS file handles maintained a 32-bit wide offset parameter (unsigned), telling where to start the next read or write operation. The biggest size for any single Amiga file under these operating systems therefore comes to 2^32 bytes = 4 GiB. After AmigaOS 3.5, file handles may reference 2^64 bytes = 16 EiB files. However, OFS-formatted disks continue to retain the 32-bit limitations, for that is an intrinsic limitation of the format as recorded on the media. An OFS data block stores at most BSIZE-24 bytes (i.e. normally 488 bytes at the most frequently used BSIZE of 512 bytes). The rootblock is located at the physical middle of the media: block number 880 for DD disks, block 1760 for HDs. This helps minimize seek times. The exact calculation for where it is stored is as follows: numCyls = highCyl - lowCyl + 1 highKey = numCyls * numSurfaces * numBlocksPerTrack - 1 rootKey = INT( (numReserved + highKey) / 2 ) The rootblock contains information about the disk: its name, its formatting date, etc. It also contains information on accessing the files/directories/links located at the uppermost (root) directory. The characters '/' and ':' are forbidden in file and volume names, but *!@#$%|^+&_()=\-[]{}';",<>.? and letters with diacritical marks like âè are allowed. The date fields in the root block (and other blocks) are structured in the form of DAYS, MINS and TICKS. The DAYS field contains the number of days since January 1, 1978. 
MINS is the number of minutes that have passed since midnight, and TICKS are expressed in fiftieths of a second. A day value of zero is considered illegal by most programs. Since the DAYS value is stored as a 32-bit number, the Amiga filesystem does not have an inherent Year 2000 problem or Year 2038 problem. To reach a file, directory or link, AmigaDOS uses a hash function to calculate which 32-bit word in the disk block to use as a pointer to a hash bucket list, which in turn contains the file, directory, or link record. A bucket list is used to support filesystem objects with names that hash to the same offset. For example: file_1a, file_24 and file_5u have the same hash value. Filename characters can be lowercase and uppercase, but are not case sensitive when accessed. That is to say, "MyFile" and "myfile" in the same directory refer to the same file. Files are composed of a file header block, which contains information about the file (size, last access time, data block pointers, etc.), and the data blocks, which contain the actual data. The file header block contains up to BSIZE/4 - 56 data block pointers (which amounts to 72 entries with the usual 512-byte blocks). If a file is larger than that, file extension blocks will be allocated to hold the data block pointers. File extension blocks are organised in a linked list, which starts in the file header block ('extension' field). See also Amiga Fast File System Professional File System Smart File System List of file systems Rigid Disk Block External links The ADF specs in LHA format, from Aminet Disk file systems Amiga AmigaOS
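To make the geometry and timestamp conventions above concrete, here is a minimal Python sketch. It is not AmigaOS code: the function names are invented for this example, and reserved=2 assumes the two boot blocks of a standard Amiga floppy.

import datetime

def root_block(low_cyl, high_cyl, surfaces, blocks_per_track, reserved=2):
    # The rootblock formula quoted above, with integer division.
    num_cyls = high_cyl - low_cyl + 1
    high_key = num_cyls * surfaces * blocks_per_track - 1
    return (reserved + high_key) // 2

print(root_block(0, 79, 2, 11))  # DD disk: 880
print(root_block(0, 79, 2, 22))  # HD disk: 1760

def decode_ofs_date(days, mins, ticks):
    # DAYS counts from 1 January 1978; TICKS are fiftieths of a second.
    epoch = datetime.datetime(1978, 1, 1)
    return epoch + datetime.timedelta(days=days, minutes=mins, seconds=ticks / 50.0)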
Amiga Old File System
Technology
1,229
43,371,589
https://en.wikipedia.org/wiki/Callistosporium%20luteo-olivaceum
Callistosporium luteo-olivaceum is a species of agaric fungus in the family Callistosporiaceae. It was originally described as Agaricus luteo-olivaceus by Miles Joseph Berkeley and Moses Ashley Curtis in 1859. Rolf Singer transferred it to Callistosporium in 1946. The fungus has an extensive synonymy. Although rare, C. luteo-olivaceum is widely distributed in temperate and tropical areas of Europe and North America. In 2014, it was reported growing in pine forests in the Western Himalaya, Pakistan. The species is inedible. The caps are brownish, as are the stipes, which are fibrillose and hollow, with yellowish tomentum near the base. The spores are colorless but produce a yellow color in ammonia. References External links Agaricales Fungi described in 1859 Fungi of Europe Fungi of North America Fungi of Pakistan Inedible fungi Taxa named by Miles Joseph Berkeley Fungus species
Callistosporium luteo-olivaceum
Biology
209
71,980,146
https://en.wikipedia.org/wiki/Glossary%20of%20mycology
This glossary of mycology is a list of definitions of terms and concepts relevant to mycology, the study of fungi. Terms in common with other fields, if repeated here, generally focus on their mycology-specific meaning. Related terms can be found in glossary of biology and glossary of botany, among others. List of Latin and Greek words commonly used in systematic names and Botanical Latin may also be relevant, although some prefixes and suffixes very common in mycology are repeated here for clarity. See also List of mycologists Outline of fungi Outline of lichens Glossary of lichen terms References Bibliography Mycology mycology Wikipedia glossaries using description lists
Glossary of mycology
Biology
163
2,358,741
https://en.wikipedia.org/wiki/Odometry
Odometry is the use of data from motion sensors to estimate change in position over time. It is used in robotics by some legged or wheeled robots to estimate their position relative to a starting location. This method is sensitive to errors due to the integration of velocity measurements over time to give position estimates. Rapid and accurate data collection, instrument calibration, and processing are required in most cases for odometry to be used effectively. The word odometry is composed of the Greek words odos (meaning "route") and metron (meaning "measure"). Example Suppose a robot has rotary encoders on its wheels or on its legged joints. It drives forward for some time and then would like to know how far it has traveled. It can measure how far the wheels have rotated, and if it knows the circumference of its wheels, compute the distance. Train operations are also frequent users of odometry. Typically, a train gets an absolute position by passing over stationary sensors in the tracks, while odometry is used to calculate relative position while the train is between the sensors. More sophisticated example Suppose a simple robot has two wheels, both capable of moving forward or in reverse, positioned parallel to each other and equidistant from the robot's center. Additionally, each motor has a rotary encoder, allowing determination of whether either wheel has traveled one "unit" forward or reverse along the floor. This unit is defined as the ratio of the wheel's circumference to the encoder's resolution. If the left wheel were to move forward one unit while the right wheel remained stationary, then the right wheel acts as a pivot, and the left wheel traces a circular arc in the clockwise direction. Since one's unit of distance is usually tiny, one can approximate by assuming that this arc is a line. Thus, the original position of the left wheel, the final position of the left wheel, and the position of the right wheel form a triangle, which one can call A. Also, the original position of the center, the final position of the center, and the position of the right wheel form a triangle which one can call B. Since the center of the robot is equidistant to either wheel, and as they share the angle formed at the right wheel, triangles A and B are similar triangles. In this situation, the magnitude of the change of position of the center of the robot is one half of a unit. The angle of this change can be determined using the law of sines. See also Dead reckoning Visual odometry External links Robot control
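To make the two-wheel example concrete, here is a minimal Python sketch of one dead-reckoning update. It is illustrative only: the function name, the sign conventions (heading measured counterclockwise in radians), and the mean-heading arc approximation are assumptions made for this example, not a standard API.

from math import cos, sin

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    # d_left and d_right are the distances each wheel has rolled, i.e.
    # encoder counts times the distance-per-count "unit" from the text;
    # wheel_base is the distance between the two wheels.
    d_center = (d_left + d_right) / 2.0        # distance the center moved
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Approximate the short arc by a straight segment at the mean heading.
    x += d_center * cos(theta + d_theta / 2.0)
    y += d_center * sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Left wheel moves one unit, right wheel stays still, wheels two units
# apart: the center advances half a unit and the heading decreases
# (a clockwise turn), matching the similar-triangles argument above.
print(odometry_step(0.0, 0.0, 0.0, 1.0, 0.0, 2.0))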
Odometry
Engineering
538
2,722,325
https://en.wikipedia.org/wiki/Group%20code
In coding theory, group codes are a type of code. Group codes consist of linear block codes which are subgroups of G^n, where G is a finite Abelian group. A systematic group code is a code over G^n of order |G|^k defined by n - k homomorphisms which determine the parity check bits. The remaining k bits are the information bits themselves. Construction Group codes can be constructed by special generator matrices which resemble generator matrices of linear block codes, except that the elements of those matrices are endomorphisms of the group instead of symbols from the code's alphabet. For example, in such a generator matrix the entries are themselves matrices, acting as endomorphisms of the group. In this scenario, each codeword can be represented as a combination of the generators of the code. See also Group coded recording (GCR) References Further reading Coding theory
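As a minimal worked instance (chosen for illustration; it is not the article's own stripped example): over the group G = Z_2 the only endomorphisms are the zero map and the identity, so an endomorphism-valued generator matrix reduces to an ordinary binary matrix, and group codes over Z_2 coincide with binary linear block codes. For the [3,1] repetition code:

G = (1 1 1)
C = { mG : m in Z_2 } = { (0,0,0), (1,1,1) }, a subgroup of Z_2^3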
Group code
Mathematics
164
27,915,795
https://en.wikipedia.org/wiki/Process%20analytical%20chemistry
Process analytical chemistry (PAC) is the application of analytical chemistry with specialized techniques, algorithms, and sampling equipment for solving problems related to chemical processes. It is a specialized form of analytical chemistry used for process manufacturing, similar to process analytical technology (PAT) used in the pharmaceutical industry. The chemical processes are for production and quality control of manufactured products, and process analytical technology is used to determine the physical and chemical composition of the desired products during a manufacturing process. It was first mentioned in the chemical literature in 1946. Process sampling Process analysis initially involved sampling a variety of process streams or webs and transporting samples to quality control or central analytical service laboratories. Time delays for analytical results due to sample transport and analytical preparation steps negated the value of many chemical analyses for purposes other than product release. Over time it was understood that real-time measurements provided timely information about a process, which was far more useful for high efficiency and quality. The development of real-time process analysis has provided information for process optimization during any manufacturing process. The journal Analytical Chemistry publishes a biennial review of the most recent developments in the field. The first real-time measurements in a production environment were made with modified laboratory instrumentation; in recent times specialized process and handheld instrumentation has been developed for immediate analysis. Applications Process analytical chemistry involves the following sub-disciplines of analytical chemistry: microanalytical systems, nanotechnology, chemical detection, electrochemistry or electrophoresis, chromatography, spectroscopy, mass spectrometry, process chemometrics, process control, flow injection analysis, ultrasound, and handheld sensors. References Further reading McMahon, T.; Wright, E. L. in Analytical Instrumentation: A Practical Guide for Measurement and Control; Sherman, R.E., Rhodes, L. J., Eds.; Instrument Society of America: Research Triangle Park, NC, 1996. Gregory, C. H. (Team Leader); Appleton, H. B.; Lowes, A. P.; Whalen, F. C. Instrumentation & Control in the German Chemical Industry. British Intelligence Operations Subcommittee Report 1007, 12 June 1946 (per discussion with Terry McMahon). Analytical chemistry Microfluidics Electrochemistry Electrophoresis Chromatography Cheminformatics Ultrasound
Process analytical chemistry
Chemistry,Materials_science,Biology
469
69,687,865
https://en.wikipedia.org/wiki/Elburt%20F.%20Osborn
Elburt Franklin Osborn (August 13, 1911 – January 19, 1998) was an American geochemist and educator. He served as the 13th director of the U.S. Bureau of Mines. Early life Elburt Franklin Osborn was born on August 13, 1911, in Kishwaukee, Illinois, to Anna (née Sherman) and William Franklin Osborn. Osborn graduated with a Bachelor of Arts in geology from DePauw University in 1932. He received a Master of Science in petrology from Northwestern University in 1934 and a PhD in petrology from the California Institute of Technology in 1938. Career In 1938, Osborn joined the Geophysical Lab at the Carnegie Institution for Science in Washington, D.C. He also served as a consultant on ballistic problems for Division I of the National Defense Research Committee during World War II. His work related to gun barrel erosion and internal ballistics. In 1946, Osborn joined Pennsylvania State University as a professor of geochemistry and chairman of the earth sciences department. He then served as associate dean from 1952 to 1953 and served as dean of the College of Mineral Studies from 1953 to 1958. In 1952, Osborn, along with Thomas Bates, founded the Materials Characterization Laboratory. Osborn and his students have received international recognition for their research in the field of high-temperature reactions as applied to iron and steel technology and to volcanic phenomena. He became the vice president for research at Penn State in 1959 and served in that role until 1970. During his 11 years as vice president, he helped to quadruple research budgets at Penn State. He remained affiliated with the university until resigning on August 14, 1971. Under his leadership, Penn State introduced the first interdisciplinary curriculum in solid state technology in 1960 and opened the Interdisciplinary Materials Research Laboratory in 1962. Osborn was appointed director of the U.S. Bureau of Mines on October 23, 1970. As director, he helped establish the Pennsylvania Mining and Mineral Resources Research Institute. He left the role on September 30, 1973. In 1973, he became a professor at the geophysical laboratory of the Carnegie Institute. From 1978 to 1987, he served as a senior research fellow at the Carnegie Institute. In 1974, Osborn was named chairman of the National Research Council's Board on Mineral Resources. He served as president of the Mineralogical Society of America in 1960, the American Ceramic Society in 1964, the Geochemical Society in 1967, the Society of Economic Geologists in 1971, and the Geophysical Society of America. He was a fellow of the Geological Society of America, the Mineralogical Society of America, the American Association for the Advancement of Science, the American Geophysical Union and the American Ceramic Society. He was elected as a member of the National Academy of Engineering in 1968 for "advances in ceramics, slag, mineral, and steel technologies". He was also an honorary member of the Canadian Ceramic Society and a member of the National Science Foundation. Personal life and death Osborn married Jean McLeod Thomson on August 12, 1939. Together, they had two children, James and Ian. Osborn died on January 19, 1998, at his home in State College, Pennsylvania. He was interred at Centre County Memorial Park in Centre County, Pennsylvania. Awards Osborn received an honorary Doctor of Science degree from Alfred University in 1965 and an honorary Doctor of Science degree from Ohio State University in 1972. 
He was awarded the Mineralogical Society of America's Roebling Medal in 1972. Osborn was awarded the American Ceramic Society's Albert Victor Bleininger Memorial Award in 1976. References 1911 births 1998 deaths People from Winnebago County, Illinois DePauw University alumni Northwestern University alumni California Institute of Technology alumni Pennsylvania State University faculty United States Bureau of Mines personnel American geochemists Fellows of the Geological Society of America Fellows of the American Ceramic Society Fellows of the American Association for the Advancement of Science Fellows of the American Geophysical Union Members of the United States National Academy of Engineering Presidents of the Geochemical Society 20th-century American chemists Fellows of the Mineralogical Society of America
Elburt F. Osborn
Chemistry
812
495,236
https://en.wikipedia.org/wiki/Rho%20factor
A ρ factor (Rho factor) is a bacterial protein involved in the termination of transcription. Rho factor binds to the transcription terminator pause site, an exposed region of single-stranded RNA (a stretch of 72 nucleotides) after the open reading frame at C-rich/G-poor sequences that lack obvious secondary structure. Rho factor is an essential transcription protein in bacteria. In Escherichia coli, it is a ~274.6 kD hexamer of identical subunits. Each subunit has an RNA-binding domain and an ATP-hydrolysis domain. Rho is a member of the RecA/SF5 family of ATP-dependent hexameric helicases that function by wrapping nucleic acids around a single cleft extending around the entire hexamer. Rho functions as an ancillary factor for RNA polymerase. There are two types of transcriptional termination in bacteria, rho-dependent termination and intrinsic termination (also called Rho-independent termination). Rho-dependent terminators account for about half of the E. coli factor-dependent terminators. Other termination factors discovered in E. coli include Tau and NusA. Rho-dependent terminators were first discovered in bacteriophage genomes. Function A Rho factor acts on an RNA substrate. Rho's key function is its helicase activity, for which energy is provided by RNA-dependent ATP hydrolysis. The initial binding site for Rho is an extended (~70 nucleotides, sometimes 80–100 nucleotides) single-stranded region, rich in cytosine and poor in guanine, called the rho utilisation site (rut), in the RNA being synthesised, upstream of the actual terminator sequence. Several rho binding sequences have been discovered. No consensus is found among these, but the different sequences each seem specific, as small mutations in the sequence disrupt its function. Rho binds to RNA and then uses its ATPase activity to provide the energy to translocate along the RNA until it reaches the RNA–DNA helical region, where it unwinds the hybrid duplex structure. RNA polymerase pauses at the termination sequence because of a specific site, around 100 nt away from the Rho binding site, called the Rho-sensitive pause site. So, even though the RNA polymerase is about 40 nt per second faster than Rho, this does not pose a problem for the Rho termination mechanism, as the pause allows Rho factor to catch up. In short, Rho factor acts as an ATP-dependent unwinding enzyme, moving along the newly forming RNA molecule towards its 3′ end and unwinding it from the DNA template as it proceeds. Mutations A nonsense mutation in one gene of an operon prevents the translation of subsequent genes in the unit. This effect is called mutational polarity. A common cause is the absence of the mRNA corresponding to the subsequent (distal) parts of the unit. Suppose that there are Rho-dependent terminators within the transcription unit, that is, before the terminator that is usually used. Normally these earlier terminators are not used, because the ribosome prevents Rho from reaching RNA polymerase. But a nonsense mutation releases the ribosome, so that Rho is free to attach to and/or move along the RNA, enabling it to act on RNA polymerase at the terminator. As a result, the enzyme is released, and the distal regions of the transcription unit are never transcribed. Evolution Rho factor has not been found in Archaea. 
See also Termination factor Mutation Frequency Decline (Mfd) protein is also capable of dissociating RNA polymerase from the DNA template References External links Bacterial proteins Escherichia coli Gene expression Helicases
Rho factor
Chemistry,Biology
795
39,484,358
https://en.wikipedia.org/wiki/ParABS%20system
The parABS system is a broadly conserved molecular mechanism for plasmid partitioning and chromosome segregation in bacteria. Originally identified as a genetic element required for faithful partitioning of low-copy-number plasmids, it consists of three components: the ParA ATPase, the ParB DNA-binding protein, and the cis-acting parS sequence. The parA and parB genes are typically found in the same operon, with parS elements located within or adjacent to this operon. Collectively, these components function to ensure accurate partitioning of plasmids or whole chromosomes between bacterial daughter cells prior to cell division. Mechanism Based on chromatin immunoprecipitation (ChIP) experiments, ParB has the ability to bind not only to high-affinity parS sites but also to adjacent nonspecific DNA, a behavior known as "spreading". The ParB-DNA complex is thought to be translocated by a Brownian ratchet mechanism involving the ParA ATPase: ParA binds DNA nonspecifically in its ATP-bound state but much more weakly in its ADP-bound state. The ParB-DNA complex binds to ATP-bound ParA, stimulating its ATPase activity and its dissociation from DNA. In this way, the ParB-DNA complex can be translocated by chasing a receding wave. This translocation mechanism has been observed by fluorescence microscopy both in vivo and more recently in vitro with purified components. References Plasmids
ParABS system
Biology
312
34,624,069
https://en.wikipedia.org/wiki/Siconos
SICONOS is an open-source scientific software package primarily targeted at modeling and simulating non-smooth dynamical systems (NSDS): Mechanical systems (rigid-body or solid) with unilateral contact and Coulomb friction, as found in non-smooth mechanics, contact dynamics or granular material. Switched electrical circuits such as power converters, rectifiers, phase-locked loops (PLL) or analog-to-digital converters. Sliding mode control systems. Other applications are found in systems and control (hybrid systems, differential inclusions, optimal control with state constraints), optimization (complementarity problems and variational inequalities), biology (gene regulatory networks), fluid mechanics and computer graphics, etc. Components The software is based on three main components: Siconos/Numerics (C API). A collection of low-level algorithms for solving the basic algebra and optimization problems arising in the simulation of nonsmooth dynamical systems: Linear complementarity problems (LCP) Mixed linear complementarity problems (MLCP) Nonlinear complementarity problems (NCP) Quadratic programming problems (QP) Friction-contact problems (2D or 3D) (second-order cone programming (SOCP)) Primal or dual relay problems Siconos/Kernel. A C++ API that allows one to model and simulate nonsmooth dynamical systems. It contains dynamical-system classes (first-order systems, Lagrangian systems, Newton-Euler systems) and nonsmooth laws (complementarity, relay, friction, contact, impact). Siconos/Front-end (Python API). Mainly an auto-generated SWIG interface to the C++ API, with special support for data structures. Performance According to peer-reviewed studies published by its developers, Siconos was approximately five times faster than Ngspice or ELDO (a commercial SPICE by Mentor Graphics) and 250 times faster than PLECS when solving a buck converter. See also Differential inclusion (an extension of the notion of differential equation), on which much of the NSDS theory relies Stiff equation, which affects ODEs/DAEs for functions with "sharp turns" and which affects numerical convergence References External links The official Siconos site other related publications Free science software Free software programmed in C Free software programmed in C++ Software using the Apache license Cross-platform free software Free software for Linux Free software for Windows Free software for macOS Dynamical systems Scientific simulation software
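To make the Siconos/Numerics problem list concrete, here is a minimal sketch of projected Gauss-Seidel, a classic iterative scheme for the linear complementarity problem w = Mz + q, z >= 0, w >= 0, z.w = 0. It is written in plain Python/NumPy purely for illustration and is not code from, or an interface to, the Siconos library; it assumes the diagonal entries of M are positive.

import numpy as np

def lcp_pgs(M, q, iters=200):
    # Projected Gauss-Seidel for the LCP: w = M z + q, z >= 0, w >= 0, z.w = 0.
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]  # row residual without z[i]
            z[i] = max(0.0, -r / M[i, i])         # solve row i, project onto z >= 0
    return z

# Tiny example with a symmetric positive definite M:
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = lcp_pgs(M, q)
print(z, M @ z + q)  # z = (0.5, 0); w = (0, 1.5): nonnegative and complementary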
Siconos
Physics,Mathematics
493
54,575,571
https://en.wikipedia.org/wiki/Explainable%20artificial%20intelligence
Explainable AI (XAI), often overlapping with interpretable AI, or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability of intellectual oversight over AI algorithms. The main focus is on the reasoning behind the decisions or predictions made by the AI algorithms, to make them more understandable and transparent. This addresses users' requirement to assess safety and scrutinize the automated decision making in applications. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision. XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason. XAI may be an implementation of the social right to explanation. Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on. This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions. Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box. White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by domain experts. XAI algorithms follow the three principles of transparency, interpretability, and explainability. A model is transparent "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer." Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans. Explainability is a concept that is recognized as important, but a consensus definition is not yet available; one possibility is "the collection of features of the interpretable domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)". In summary, Interpretability refers to the user's ability to understand model outputs, while Model Transparency includes Simulatability (reproducibility of predictions), Decomposability (intuitive explanations for parameters), and Algorithmic Transparency (explaining how algorithms work). Model Functionality focuses on textual descriptions, visualization, and local explanations, which clarify specific outputs or instances rather than entire models. All these concepts aim to enhance the comprehensibility and usability of AI systems. If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking them and thereby verifying them, improving the algorithms, and exploring new facts. Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions. Concept Bottleneck Models, which use concept-level abstractions to explain model reasoning, are examples of this and can be applied in both image and text prediction tasks. 
This is especially important in domains like medicine, defense, finance, and law, where it is crucial to understand decisions and build trust in the algorithms. Many researchers argue that, at least for supervised machine learning, the way forward is symbolic regression, where the algorithm searches the space of mathematical expressions to find the model that best fits a given dataset. AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of assessing how positive film reviews are in the test dataset." The AI may learn useful general rules from the test set, such as "reviews containing the word "horrible" are likely to be negative." However, it may also learn inappropriate rules, such as "reviews containing 'Daniel Day-Lewis' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the training set, or if people consider the rule to be "cheating" or "unfair." A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set. Goals Cooperation between agents – in this case, algorithms and humans – depends on trust. If humans are to accept algorithmic prescriptions, they need to trust them. Incompleteness in formal trust criteria is a barrier to optimization. Transparency, interpretability, and explainability are intermediate goals on the road to these more comprehensive trust criteria. This is particularly relevant in medicine, especially with clinical decision support systems (CDSS), in which medical professionals should be able to understand how and why a machine-based decision was made in order to trust the decision and augment their decision-making process. AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data but do not reflect the more nuanced implicit desires of the human system designers or the full complexity of the domain data. For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures rather than learning how to tell if a horse was actually pictured. In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in a way such that it falsely appeared to be grasping the object. One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance. Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI. Other applications of XAI are knowledge extraction from black-box models and model comparisons. In the context of monitoring systems for ethical and socio-legal compliance, the term "glass box" is commonly used to refer to tools that track the inputs and outputs of the system in question, and provide value-based explanations for their behavior. These tools aim to ensure that the system operates in accordance with ethical and legal standards, and that its decision-making processes are transparent and accountable. The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate. 
The term is also used to name a voice assistant that produces counterfactual statements as explanations. Explainability and interpretability techniques There is a subtle difference between the terms explainability and interpretability in the context of AI. Some explainability techniques don't involve understanding how the model works, and may work across various AI systems. Treating the model as a black box and analyzing how marginal changes to the inputs affect the result sometimes provides a sufficient explanation. Explainability Explainability is useful for ensuring that AI models are not making decisions based on irrelevant or otherwise unfair criteria. For classification and regression models, several popular techniques exist: Partial dependency plots show the marginal effect of an input feature on the predicted outcome. SHAP (SHapley Additive exPlanations) enables visualization of the contribution of each input feature to the output. It works by calculating Shapley values, which measure the average marginal contribution of a feature across all possible combinations of features. Feature importance estimates how important a feature is for the model. It is usually done using permutation importance, which measures the decrease in performance when the feature's values are randomly shuffled across all samples. LIME locally approximates a model's outputs with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. For images, saliency maps highlight the parts of an image that most influenced the result. Expert systems and other knowledge-based systems are software systems built by domain experts. They encode the domain knowledge in a knowledge base, usually modeled as production rules, which users can query. Because the knowledge base is expressed in the language of the domain, such systems can provide an explanation of their reasoning or problem-solving activity. However, these techniques are not very suitable for language models like generative pretrained transformers. Since these models generate language, they can provide an explanation, but the explanation may not be reliable. Other techniques include attention analysis (examining how the model focuses on different parts of the input), probing methods (testing what information is captured in the model's representations), causal tracing (tracing the flow of information through the model) and circuit discovery (identifying specific subnetworks responsible for certain behaviors). Explainability research in this area overlaps significantly with interpretability and alignment research. Interpretability Scholars sometimes use the term "mechanistic interpretability" to refer to the process of reverse-engineering artificial neural networks to understand their internal decision-making mechanisms and components, similar to how one might analyze a complex machine or computer program. Interpretability research often focuses on generative pretrained transformers. It is particularly relevant for AI safety and alignment, as it may make it possible to identify signs of undesired behaviors such as sycophancy, deceptiveness or bias, and to better steer AI models. Studying the interpretability of the most advanced foundation models often involves searching for an automated way to identify "features" in generative pretrained transformers.
In a neural network, a feature is a pattern of neuron activations that corresponds to a concept. A compute-intensive technique called "dictionary learning" makes it possible to identify features to some degree. Enhancing the ability to identify and edit features is expected to significantly improve the safety of frontier AI models. For convolutional neural networks, DeepDream can generate images that strongly activate a particular neuron, providing a visual hint about what the neuron is trained to identify. History and methods During the 1970s to 1990s, symbolic reasoning systems, such as MYCIN, GUIDON, SOPHIE, and PROTOS could represent, reason about, and explain their reasoning for diagnostic, instructional, or machine-learning (explanation-based learning) purposes. MYCIN, developed in the early 1970s as a research prototype for diagnosing bacteremia infections of the bloodstream, could explain which of its hand-coded rules contributed to a diagnosis in a specific case. Research in intelligent tutoring systems resulted in developing systems such as SOPHIE that could act as an "articulate expert", explaining problem-solving strategy at a level the student could understand, so they would know what action to take next. For instance, SOPHIE could explain the qualitative reasoning behind its electronics troubleshooting, even though it ultimately relied on the SPICE circuit simulator. Similarly, GUIDON added tutorial rules to supplement MYCIN's domain-level rules so it could explain the strategy for medical diagnosis. Symbolic approaches to machine learning relying on explanation-based learning, such as PROTOS, made use of explicit representations of explanations expressed in a dedicated explanation language, both to explain their actions and to acquire new knowledge. In the 1980s through the early 1990s, truth maintenance systems (TMS) extended the capabilities of causal-reasoning, rule-based, and logic-based inference systems. A TMS explicitly tracks alternate lines of reasoning, justifications for conclusions, and lines of reasoning that lead to contradictions, allowing future reasoning to avoid these dead ends. To provide an explanation, they trace reasoning from conclusions to assumptions through rule operations or logical inferences, allowing explanations to be generated from the reasoning traces. A classic example is a rule-based problem solver with just a few rules about Socrates that concludes he has died from poison; an illustrative sketch of such a justification trace is given at the end of this section. By the 1990s researchers began studying whether it is possible to meaningfully extract the non-hand-coded rules being generated by opaque trained neural networks. Researchers in clinical expert systems creating neural network-powered decision support for clinicians sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice. In the 2010s public concerns about racial and other bias in the use of AI for criminal sentencing decisions and findings of creditworthiness may have led to increased demand for transparent artificial intelligence. As a result, many academics and organizations are developing tools to help detect bias in their systems. Marvin Minsky et al. raised the issue that AI can function as a form of surveillance, with the biases inherent in surveillance, suggesting HI (Humanistic Intelligence) as a way to create a more fair and balanced "human-in-the-loop" AI. Explainable AI has recently become a topic of active research in the context of modern deep learning.
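The following minimal sketch (an illustration written for this article, not the original MYCIN or TMS code; the rules and fact strings are assumed) shows how a forward-chaining rule engine can record justifications and then trace a conclusion such as "Socrates died from poison" back to its base assumptions:

# Rules map a set of premises to a conclusion.
rules = [
    ({"Socrates is human"}, "Socrates is mortal"),
    ({"Socrates is mortal", "Socrates drank hemlock"}, "Socrates died from poison"),
]
facts = {"Socrates is human", "Socrates drank hemlock"}
justifications = {}  # conclusion -> the premises that produced it

# Forward-chain until no rule adds a new conclusion, recording justifications.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            justifications[conclusion] = premises
            changed = True

def explain(fact, depth=0):
    """Print a reasoning trace from a conclusion down to base assumptions."""
    print("  " * depth + fact)
    for premise in justifications.get(fact, ()):
        explain(premise, depth + 1)

explain("Socrates died from poison")

A full truth maintenance system additionally tracks alternative derivations and contradictions; this sketch shows only the justification-tracing idea.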
Modern complex AI techniques, such as deep learning, are naturally opaque. To address this issue, methods have been developed to make new models more explainable and interpretable. This includes layerwise relevance propagation (LRP), a technique for determining which features in a particular input vector contribute most strongly to a neural network's output. Other techniques explain some particular prediction made by a (nonlinear) black-box model, a goal referred to as "local interpretability". Even today, the output of a deep neural network cannot be explained without dedicated explanatory mechanisms, whether these are built into the network itself or supplied by external explanatory components. There is also research on whether the concepts of local interpretability can be applied to a remote context, where a model is operated by a third-party. There has been work on making glass-box models which are more transparent to inspection. This includes decision trees, Bayesian networks, sparse linear models, and more. The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include artificial intelligence. Some techniques allow visualisations of the inputs to which individual software neurons respond most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently. There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques. Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable. Model behaviour can also be explained with reference to training data—for example, by evaluating which training inputs influenced a given behaviour the most. Explainable artificial intelligence has also been applied in pain research, specifically to understanding the role of electrodermal activity in automated pain recognition: comparisons of hand-crafted features and deep learning models highlight that simple hand-crafted features can yield performance comparable to deep learning models, and that both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data. Regulation As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for automated decision-making processes to ensure trust and transparency. The first global conference exclusively dedicated to this emerging discipline was the 2017 International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI). It has evolved over the years, with various workshops organised and co-located with many other international conferences, and it now has a dedicated global event, "The world conference on eXplainable Artificial Intelligence", with its own proceedings. The European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) to address potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in GDPR covers only the local aspect of interpretability.
In the United States, insurance companies are required to be able to explain their rate and coverage decisions. In France the Loi pour une République numérique (Digital Republic Act) grants subjects the right to request and receive information pertaining to the implementation of algorithms that process data about them. Limitations Despite ongoing endeavors to enhance the explainability of AI models, several inherent limitations persist. Adversarial parties By making an AI system more explainable, we also reveal more of its inner workings. For example, the explainability method of feature importance identifies features or variables that are most important in determining the model's output, while the influential samples method identifies the training samples that are most influential in determining the output, given a particular input. Adversarial parties could take advantage of this knowledge. For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage. An explainable AI system is also susceptible to being “gamed”—influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”. Adaptive integration and explanation Many XAI approaches provide explanations in a generic way, without taking into account users' diverse backgrounds and knowledge levels, which makes accurate comprehension difficult for all users. Expert users can find the explanations oversimplified and lacking in depth, while beginners may struggle to understand them because they are complex. This limits the ability of XAI techniques to serve users with different levels of knowledge, which can in turn affect users' trust and adoption. The perceived quality of explanations can also differ among users, since they have different levels of expertise and face different situations and conditions. Technical complexity A fundamental barrier to making AI systems explainable is the technical complexity of such systems. End users often lack the coding knowledge required to understand software of any kind. Current methods used to explain AI are mainly technical ones, geared toward machine learning engineers for debugging purposes, rather than toward the end users who are ultimately affected by the system, causing “a gap between explainability in practice and the goal of transparency”. Proposed solutions to address the issue of technical complexity include either promoting the coding education of the general public so technical explanations would be more accessible to end users, or providing explanations in layperson terms. The solution must avoid oversimplification. It is important to strike a balance between accuracy – how faithfully the explanation reflects the process of the AI system – and explainability – how well end users understand the process.
This is a difficult balance to strike, since the complexity of machine learning makes it difficult even for ML engineers to fully understand, let alone non-experts. Understanding versus trust The goal of explainability to end users of AI systems is to increase trust in the systems, even to “address concerns about lack of ‘fairness’ and discriminatory effects”. However, even with a good understanding of an AI system, end users may not necessarily trust the system. In one study, participants were presented with combinations of white-box and black-box explanations, and static and interactive explanations of AI systems. While these explanations served to increase both their self-reported and objective understanding, they had no impact on their level of trust, which remained skeptical. This outcome was especially true for decisions that impacted the end user in a significant way, such as graduate school admissions. Participants judged algorithms to be too inflexible and unforgiving in comparison to human decision-makers; instead of rigidly adhering to a set of rules, humans are able to consider exceptional cases as well as appeals against their initial decision. For such decisions, explainability will not necessarily cause end users to accept the use of decision-making algorithms. Increasing trust and acceptance may therefore require other methods, or may call into question the need to rely solely on AI for such impactful decisions in the first place. However, some emphasize that the purpose of explainability of artificial intelligence is not to merely increase users' trust in the system's decisions, but to calibrate the users' level of trust to the correct level. According to this principle, too much or too little user trust in the AI system will harm the overall performance of the human-system unit. When trust is excessive, users are not critical of the system's possible mistakes, and when they do not have enough trust in the system, they will not exhaust the benefits inherent in it. Criticism Some scholars have suggested that explainability in AI should be considered a goal secondary to AI effectiveness, and that encouraging the exclusive development of XAI may limit the functionality of AI more broadly. Critiques of XAI rely on developed concepts of mechanistic and empiric reasoning from evidence-based medicine to suggest that AI technologies can be clinically validated even when their function cannot be understood by their operators. Some researchers advocate the use of inherently interpretable machine learning models, rather than using post-hoc explanations in which a second model is created to explain the first. This is partly because post-hoc models increase the complexity in a decision pathway and partly because it is often unclear how faithfully a post-hoc explanation can mimic the computations of an entirely separate model. However, another view is that what is important is that the explanation accomplishes the given task at hand, and whether it is pre- or post-hoc doesn't matter. If a post-hoc explanation method helps a doctor diagnose cancer better, it is of secondary importance whether it is a correct/incorrect explanation. The goals of XAI amount to a form of lossy compression that will become less effective as AI models grow in their number of parameters. Along with other factors this leads to a theoretical limit for explainability. Explainability in social choice Explainability was studied also in social choice theory.
Social choice theory aims at finding solutions to social decision problems that are based on well-established axioms. Ariel D. Procaccia explains that these axioms can be used to construct convincing explanations to the solutions. This principle has been used to construct explanations in various subfields of social choice. Voting Cailloux and Endriss present a method for explaining voting rules using the axioms that characterize them. They exemplify their method on the Borda voting rule. Peters, Procaccia, Psomas and Zhou present an algorithm for explaining the outcomes of the Borda rule using O(m²) explanations, and prove that this is tight in the worst case. Participatory budgeting Yang, Hausladen, Peters, Pournaras, Fricker and Helbing present an empirical study of explainability in participatory budgeting. They compared the greedy and the equal shares rules, and three types of explanations: mechanism explanation (a general explanation of how the aggregation rule works given the voting input), individual explanation (explaining how many voters had at least one approved project, or at least 10,000 CHF in approved projects), and group explanation (explaining how the budget is distributed among the districts and topics). They compared the perceived trustworthiness and fairness of greedy and equal shares, before and after the explanations. They found that, for the method of equal shares (MES), mechanism explanation yields the highest increase in perceived fairness and trustworthiness; the second-highest was group explanation. For greedy, mechanism explanation increases perceived trustworthiness but not fairness, whereas individual explanation increases both perceived fairness and trustworthiness; group explanation decreases both. Payoff allocation Nizri, Azaria and Hazon present an algorithm for computing explanations for the Shapley value. Given a coalitional game, their algorithm decomposes it into sub-games, for which it is easy to generate verbal explanations based on the axioms characterizing the Shapley value. The payoff allocation for each sub-game is perceived as fair, so the Shapley-based payoff allocation for the given game should seem fair as well. An experiment with 210 human subjects shows that, with their automatically generated explanations, subjects perceive Shapley-based payoff allocation as significantly fairer than with a general standard explanation. See also References External links Quality control tools Artificial intelligence Artificial intelligence engineering
Explainable artificial intelligence
Engineering
5,054
594,043
https://en.wikipedia.org/wiki/Barrel%20%28unit%29
A barrel is one of several units of volume applied in various contexts; there are dry barrels, fluid barrels (such as the U.K. beer barrel and U.S. beer barrel), oil barrels, and so forth. For historical reasons the volumes of some barrel units are roughly double the volumes of others; volumes in common use range approximately from . In many connections the term is used almost interchangeably with barrel. Since medieval times the term as a unit of measure has had various meanings throughout Europe, ranging from about 100 litres to about 1,000 litres. The name was derived in medieval times from the French , of unknown origin, but still in use, both in French and as derivations in many other languages such as Italian, Polish, and Spanish. In most countries such usage is obsolescent, increasingly superseded by SI units. As a result, the meaning of corresponding words and related concepts (vat, cask, keg etc.) in other languages often refers to a physical container rather than a known measure. In the international oil market context, however, prices in United States dollars per barrel are commonly used, and the term is variously translated, often to derivations of the Latin / Teutonic root fat (for example vat or Fass). In other commercial connections, barrel sizes such as beer keg volumes also are standardised in many countries. Dry goods in the US US dry barrel: Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; representing as nearly as possible 7,056 cubic inches; and the thickness of staves not greater than (diameter ≈ ). Any barrel that is 7,056 cubic inches is recognized as equivalent. This is exactly equal to . US barrel for cranberries Defined as length of stave , diameter of head , distance between heads , circumference of bulge outside measurement; and the thickness of staves not greater than (diameter ≈ ). No equivalent in cubic inches is given in the statute, but later regulations specify it as 5,826 cubic inches. Some products have a standard weight or volume that constitutes a barrel: Cornmeal, Cement (including Portland cement), or Sugar, Wheat or rye flour, or Lime (mineral), large barrel, or small barrel Butter and cheese in UK, Salt, Fluid barrel in the US and UK Fluid barrels vary depending on what is being measured and where. In the UK a beer barrel is . In the US most fluid barrels (apart from oil) are (half a hogshead), but a beer barrel is . The size of beer kegs in the US is based loosely on fractions of the US beer barrel. When referring to beer barrels or kegs in many countries, the term may be used for the commercial package units independent of actual volume, where common range for professional use is 20–60 L, typically a DIN or Euro keg of 50 L. History Richard III, King of England from 1483 until 1485, had defined the wine puncheon as a cask holding 84 wine gallons and a wine tierce as holding 42 wine gallons. Custom had made the 42 gallon watertight tierce a standard container for shipping eel, salmon, herring, molasses, wine, whale oil, and many other commodities in the English colonies by 1700. After the American Revolution in 1776, American merchants continued to use the same size barrels. Oil barrel Definitions In the oil industry, one barrel (unit symbol bbl) is a unit of volume used for measuring oil defined as exactly 42 US gallons, approximately 159 liters, or . 
According to the American Petroleum Institute (API), a standard barrel of oil is the amount of oil that would occupy a volume of exactly at reference temperature and pressure conditions of and (or 1 atm). This standard barrel of oil will occupy a different volume at different pressures and temperatures. A standard barrel in this context is thus not simply a measure of volume, but of volume under specific conditions. () Unit multiples Oil companies that are publicly listed in the United States typically report their production using the unit multiples Mbbl (one thousand barrels) and MMbbl (one million barrels), derived from the Latin word "mille" and the Roman numeral M, meaning "thousand". Due to the risk of confusion, the Society of Petroleum Engineers recommends in their style guide that abbreviations or prefixes M or MM are not used for barrels of oil or barrels of oil equivalent, but rather that thousands, millions or billions are spelled out. Using M for thousand and MM for million is in conflict with the SI convention, where the "M" prefix stands for "mega", representing million, from the Greek for "large". Some oil companies, particularly those based in Europe, use kb (kilobarrels, one thousand barrels), mb (megabarrels, one million barrels), and gb (gigabarrels, one billion barrels). The lower case m is used to avoid confusion with the capital M used for thousand. For the same reason, the unit kbbl (one thousand barrels) is also sometimes used. Etymology The first "b" in "bbl" may have been doubled originally to indicate the plural (1 bl, 2 bbl), or possibly it was doubled to eliminate any confusion with bl as a symbol for the bale. Some sources assert that "bbl" originated as a symbol for "blue barrels" delivered by Standard Oil in its early days. However, while Ida Tarbell's 1904 Standard Oil Company history acknowledged the "holy blue barrel", the abbreviation "bbl" had been in use well before the 1859 birth of the U.S. petroleum industry. Flow rate Oil wells recover not just oil from the ground, but also natural gas and water. The term barrels of liquid per day (BLPD) refers to the total volume of liquid that is recovered. Similarly, barrels of oil equivalent or BOE is a value that accounts for both oil and natural gas while ignoring any water that is recovered. Other terms are used when discussing only oil. These terms can refer to either the production of crude oil at an oil well, the conversion of crude oil to other products at an oil refinery, or the overall consumption of oil by a region or country. One common term is barrels per day (BPD, BOPD, bbl/d, bpd, bd, or b/d), where 1 BPD is equivalent to 0.0292 gallons per minute. One BPD also becomes 49.8 tonnes per year. At an oil refinery, production is sometimes reported as barrels per calendar day (b/cd or bcd), which is total production in a year divided by the days in that year. Likewise, barrels per stream day (BSD or BPSD) is the quantity of oil product produced by a single refining unit during continuous operation for 24 hours. Burning one tonne of light, synthetic, or heavy crude yields 38.51, 39.40, or 40.90 GJ (thermal) respectively (10.70, 10.94, or 11.36 MW·h), so 1 tonne per day of synthetic crude is about 456 kW of thermal power and 1 bpd of synthetic crude is about 378 kW (slightly less for light crude, slightly more for heavy crude).
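As a worked illustration of the arithmetic behind these units, the following short Python sketch (written for illustration, not an industry library; the light-crude density of 850 kg/m³ and the nominal 6.2898 bbl/m³ factor quoted later in the article are assumptions, since real conversion factors vary with crude grade and temperature) converts between barrels, litres, flow rates, and mass:

US_GAL_PER_BBL = 42                  # an oil barrel is defined as 42 US gallons
LITRES_PER_US_GAL = 3.785411784      # exact, by definition of the US gallon
BBL_PER_CUBIC_METRE = 6.2898         # nominal factor; actual factors vary

def bbl_to_litres(bbl):
    """Barrels to litres: 1 bbl is approximately 159 L."""
    return bbl * US_GAL_PER_BBL * LITRES_PER_US_GAL

def bpd_to_gal_per_min(bpd):
    """Barrels per day to US gallons per minute (42 gal over 1440 min)."""
    return bpd * US_GAL_PER_BBL / (24 * 60)

def tonnes_to_bbl(tonnes, density_kg_per_m3):
    """Mass to barrels; requires a density, which differs per crude grade."""
    return tonnes * 1000.0 / density_kg_per_m3 * BBL_PER_CUBIC_METRE

print(round(bbl_to_litres(1), 2))          # ~158.99, matching the definition above
print(round(bpd_to_gal_per_min(1), 4))     # ~0.0292, matching the BPD figure above
print(round(tonnes_to_bbl(1, 850.0), 2))   # ~7.4 bbl per tonne of light crude

The last function makes explicit why, as noted under "Related kinds of quantity" below, there is no single conversion between tonnes and barrels: the result depends on the assumed density.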
Conversion The task of converting a standard barrel of oil to a standard cubic metre of oil is complicated by the fact that the latter is defined by the API to mean the amount of oil that, at different reference conditions (101.325 kPa and ), occupies 1 cubic metre. The fact that the reference conditions are not exactly the same means that an exact conversion is impossible unless the exact expansion coefficient of the crude is known, and this will vary from one crude oil to another. For a light oil with a density of 850 kilograms per cubic metre (API gravity of 35), warming the oil from to might increase its volume by about 0.047%. Conversely, a heavy oil with a density of 934 kg/m3 (API gravity of 20) might only increase in volume by 0.039%. If physically measuring the density at a new temperature is not possible, then tables of empirical data can be used to accurately predict the change in density. In turn, this allows maximum accuracy when converting between standard barrel and standard cubic metre. The logic above also implies a corresponding limit on measurement accuracy for barrels if there is an error in measuring the temperature at the time the volume is measured. For ease of trading, communication and financial accounting, international commodity exchanges often set a conversion factor for benchmark crude oils. For instance the conversion factor set by the New York Mercantile Exchange (NYMEX) for Western Canadian Select (WCS) crude oil traded at Hardisty, Alberta, Canada is 6.29287 U.S. barrels per standard cubic metre, despite the uncertainty in converting the volume for crude oil. Regulatory authorities in producing countries set standards for measurement accuracy of produced hydrocarbons, where such measurements affect taxes or royalties to the government. In the United Kingdom, for instance, the measurement accuracy required is ±0.25%. Qualifiers A barrel can technically be used to specify any volume. Since the actual nature of the fluids being measured varies along the stream, sometimes qualifiers are used to clarify what is being specified. In the oil field, it is often important to differentiate between rates of production of fluids, which may be a mix of oil and water, and rates of production of the oil itself. If a well is producing 10 MBD (thousands of barrels per day) of fluids with a 20% water cut, then the well would also be said to be producing 8,000 barrels of oil a day (bod). In other circumstances, it can be important to include gas in production and consumption figures. Normally, gas amount is measured in standard cubic feet or standard cubic metres (for volume at STP), as well as in kg or Btu (which do not depend on pressure or temperature). But when necessary, such volume is converted to a volume of oil of equivalent enthalpy of combustion. Production and consumption using this analogue is stated in barrels of oil equivalent per day (boed). In the case of water-injection wells, in the United States it is common to refer to the injectivity rate in barrels of water per day (bwd). In Canada, it is measured in cubic metres per day (m3/d). In general, water injection rates will be stated in the same units as oil production rates, since the usual objective is to replace the volume of oil produced with a similar volume of water to maintain reservoir pressure. Related kinds of quantity Outside the United States, volumes of oil are usually reported in cubic metres (m3) instead of oil barrels. The cubic metre is the basic volume unit in the International System.
In Canada, oil companies measure oil in cubic metres, but convert to barrels on export, since most of Canada's oil production is exported to the US. The nominal conversion factor is 1 cubic metre = 6.2898 oil barrels, but conversion is generally done by custody transfer meters on the border, since the volumes are specified at different temperatures, and the exact conversion factor depends on both density and temperature. Canadian companies operate internally and report to Canadian governments in cubic metres, but often convert to US barrels for the benefit of American investors and oil marketers. They generally quote prices in Canadian dollars per cubic metre to other Canadian companies, but use US dollars per barrel in financial reports and press statements, making it appear to the outside world that they operate in barrels. Companies on the European stock exchanges report the mass of oil in tonnes. Since different varieties of petroleum have different densities, however, there is not a single conversion between mass and volume. For example, one tonne of heavy distillates might occupy a volume of . In contrast, one tonne of crude oil might occupy , and one tonne of gasoline will require . Overall, the conversion is usually between per tonne. History The measurement of an "oil barrel" originated in the early Pennsylvania oil fields. The Drake Well, the first oil well in the US, was drilled in Pennsylvania in 1859, and an oil boom followed in the 1860s. When oil production began, there was no standard container for oil, so oil and petroleum products were stored and transported in barrels of different shapes and sizes. Some of these barrels would originally have been used for other products, such as beer, fish, molasses, or turpentine. Both the barrels (based on the old English wine measure), the tierce (159 litres) and the whiskey barrels were used. Also, barrels were in common use. The 40 gallon whiskey barrel was the most common size used by early oil producers, since they were readily available at the time. Around 1866, early oil producers in Pennsylvania concluded that shipping oil in a variety of different containers was causing buyer distrust. They decided they needed a standard unit of measure to convince buyers that they were getting a fair volume for their money, and settled on the standard wine tierce, which was two gallons larger than the standard whisky barrel. The Weekly Register, an Oil City, Pennsylvania newspaper, stated on August 31, 1866 that "the oil producers have issued the following circular": And by that means, King Richard III's English wine tierce became the American standard oil barrel. By 1872, the standard oil barrel was firmly established as 42 US-gallons. The 42 gallon standard oil barrel was officially adopted by the Petroleum Producers Association in 1872 and by the U.S. Geological Survey and the U.S. Bureau of Mines in 1882. In modern times, many different types of oil, chemicals, and other products are transported in steel drums. In the United States, these commonly have a capacity of and are referred to as such. They are called 200 litre or 200 kg drums outside the United States. In the United Kingdom and its former dependencies, a drum was used, even though all those countries now officially use the metric system and the drums are filled to 200 litres. In the United States, the 42 US-gallon size as a unit of measure is largely confined to the oil industry, while different sizes of barrel are used in other industries. 
Nearly all other countries use the metric system. Thus, the 42 US-gallon oil barrel is a unit of measure rather than a physical container used to transport crude oil. See also 55 gallon drum Barrel Barrel of oil equivalent English brewery cask units English wine cask units Imperial units List of unusual units of measurement Petroleum Petroleum pricing around the world Standard Barrel Act For Fruits, Vegetables, and Dry Commodities United States customary units Units of measurement Notes References Brewing Customary units of measurement in the United States Imperial units Petroleum Units of volume Alcohol measurement
Barrel (unit)
Chemistry,Mathematics
3,044
75,127,626
https://en.wikipedia.org/wiki/Your%20Favourite%20London%20Sounds
Your Favourite London Sounds is an album compiled by English musician Peter Cusack and released in November 2001 by the London Musicians Collective (LMC). It collects 40 field recordings of sounds around the English city of London, most of which were recorded by Cusack. The project originated when the LMC hosted a temporary radio station for the 1998 Meltdown Festival, which Cusack used to ask festival goers and listeners what their favourite 'London sound' was. He received hundreds of responses, many of which were varied and often personal. The sounds on the recording are highly diverse and vary between outdoor and indoor sounds, some of which are famous and some of which are more atypical. Several sounds are specific to London while others are broader. On release, the album received critical acclaim, including being named the week's best CD by The Guardian, and inspired radio and newspaper commentary. Cusack commented that it received more attention than his musical work. Ultimately, the London project was the first in Cusack's larger Favourite Sounds Project, which visited other cities across the world. Background and recording As part of John Peel's Meltdown Festival in June 1998, the London Musicians Collective (LMC) launched and ran a temporary radio station, Resonance 107.3 FM, over four weeks. It was the first ever London station dedicated to radio art, and later evolved into the artist-run Resonance FM. The LMC and particularly the member Peter Cusack, an improvisational musician, used Resonance 107.3 FM as an opportunity to undertake research, asking festival goers and listeners to send in or tell of their "favourite London sound". This formed the basis of a programme, London Soundscape. Cusack received hundreds of responses, and then travelled around London to capture all the relevant sounds himself, providing hours of raw material. There was no overwhelming "favourite London sound" to emerge from the replies, with few people offering the same answer as anyone else. Many of the selections were surprisingly personal, and varied between outdoor and indoor sounds (such as post hitting a doormat). Cusack was surprised by how considered and largely serious the answers were, as well as how detailed and specific they could be. He found this "especially encouraging", saying: "It has been said in soundscape circles that because of ever increasing noise we are losing the ability to hear. I think this is nonsense. We may find it pretty difficult to talk or think about sound but we certainly hear it, including the details within all the noise." He expressed surprise at minor details that participants often included in their answers, ones which "may not be sonically apparent but which for them were important." He believed such sounds slowly gain personal significance to those who "travel the same route everyday". The sound of Big Ben was the most popular choice, while some surprising picks included arcade machines and traffic, but a large number of responses offered a collection of sounds rather than an individual one. Realised in 2001, Your Favourite London Sounds is a CD based on responses from the questionnaire, featuring 40 examples of London sounds given as choices. Some of the recordings are the same as those which debuted on the 1998 radio station. 
Cusack compiled and edited the disc and recorded 35 of its tracks; the others were recorded by Matthias Kispert ("Brixton Station"), Tom Wallace ("Bus Pressure"), Bunny Schendler ("Euston Main Line Railway Station"), Clive Bell ("Tottenham Hotspurs Football Club, White Hart Lane") and Syngen Brown ("LRT Transformer, Putney"). Contents According to The Los Angeles Times writer Jill Lawless, Your Favourite London Sounds is an aural collage of London's distinctive soundscape, one which "has inspired Londoners to close their eyes and listen to their city." Author David M. Frohlich writes the project demonstrates that the favourite sounds of London are "highly idiosyncratic, and just as likely to include man-made sounds as natural sounds", while Lawless said the release "confirms Londoners' intense and idiosyncratic relationship with the urban soundscape." The subjects are diverse, ranging from frying onions, "rain on skylight while lying in bed", "the call to prayer from an east London mosque", double-decker buses, coffee makers, a voicemail message, a bicycle crossing a canal towpath, a hissing bus door, a noisy street market, birds, traffic, taxis, trains, geese, wailing sirens, humming power plants, lapping rivers and "electronic bleeps at supermarket checkouts." Kenneth Goldsmith opines that the project provides "an odd way to think about a city", while according to John L. Walters, the release is "not that outlandish", as many of Cusack's prior albums, including Where Is the Green Parrot? (1999), similarly include lengthy field recordings. Some sounds are specific to London, such as the London Underground sounds (such as the "mind the gap" announcement) and the bell on the 73 Bus. Cusack said many people "mentioned bus sounds – but not just any bus. It had to be the No. 73 bus, or the No. 12 bus. It was much more personal than I was expecting." He believed that different parts of the world sound very different from one another, adding: "On the London Underground, the way the 'mind the gap' echoes down the tunnel comes to you in such a London way, you can't fail to know where you are when you hear it." As Goldsmith describes, some sounds captured in language-laden locations (such as coffee shops and markets) catch locals in conversation, bringing "a specific local flavour to the tracks", while other sounds are not often associated with cities, including rolling thunder and "the unaccompanied chirping of birds". Several sounds are presented with brief descriptions, which Goldsmith says gives them "more poetic weight". Lawless says that in addition to "predictable natural sounds," such as a fountain, blackbirds, and rising and falling barges moored in the Thames, there are less predictable sounds and several "extremely delicate and specific" ones, such as bicycle wheels riding over "loose concrete slabs" on a specific towpath, while others are more generic, including turnstiles moving on entry to a football game. She also notes the inclusion of several endangered sounds, such as "the slamming of old-fashioned train doors, some of the few not yet replaced by mechanical sliding doors." According to David Toop, sounds vary from famous (Big Ben and "mind the gap"), social (a club queue and Dalston Market), highly personal (a phone message), "universally shared soundmarks" ("post through letterbox", "key in door") and unusual "ear-of-the-musician" answers.
Toop also characterises some sounds as possessing "a distinct air of cinema futurism", citing the "disembodied announcements echoing in public space, polyglot languages overhead on the transport system, impersonal reminders of heightened security in the beleaguered city, ageing machinery grinding toward obsolescence, its tortured wails a taunting reminder of our financially draining dependence on clockwork history." The disc opens with the sound of Big Ben, the most popular choice, which was captured at street level. Cusack's favourite sound is "a nightingale singing against the hum of an electricity substation"; he admired the juxtaposition between "the birdsong and the crass, everyday urban hum." He also singled out the sounds of Brick Lane for clearly presenting the area's strong Bangladeshi community. Toop's own choices were the spatial sounds of distant emergency sirens at night and the high-pitched croaks of swifts, whose appearances throughout the year provoke "seasonal nostalgia". Drummer Charles Hayward's choice was "the Deptford Grid electricity sub-station at the edge of the Thames, a saturating drone washed by waves from the river." Hayward admired how people can walk through the overtones, and appreciated "the strange conjunction of that and the sounds of the river, and the sounds of people walking through the pebbles." The sound of an espresso machine was described by Walters as "a delicious, drawn-out sequence of clunks and drips, hissing and explosive boiling recorded in close-up, fetishistic detail"; it is introduced by its nominator as "not specifically a London sound". On the track "Deptford Market", Cusack is heard explaining to a woman that he is recording "all the clanging as you take all the stuff down." Release and reception Your Favourite London Sounds was released as a CD in November 2001 by the LMC's eponymous label. The booklet lists further responses which are not captured on the recording, including "a baby laughing on the Underground", "my boyfriend's orgasms and I love yous" and "none, I wear earplugs." Despite the humdrum nature of its contents, the CD was critically praised by reviewers and inspired newspaper articles and radio discussions. Cusack commented that the release had received "far more interest than any of my musical work has ever had". He was pleased with the final recording, commenting: "It was obvious from the responses that people did listen in a lot of detail." In his review for The Guardian, Walters called Your Favourite London Sounds a "strangely comforting" and "pleasingly mundane" disc with a concept that is theoretically endless. He noted that the LMC consider it to be an "audio postcard" and believed it would sell strongly in tourist shops across London, adding: "It's a Christmas present for homesick émigrés; a souvenir; a generous sample library; a versatile source of filler material for radio schedulers; an audio document of contemporary urban life. It will bring a smile of recognition to many harassed city-dwellers." He named it the newspaper's "CD of the week." In the New York Press, Goldsmith believed the group of sounds to be somewhat uninteresting, with subjective selections that suggest the participants were "not really thinking about how to define the ultimate sound of London."
However, he added that as the sounds they chose were those they "encountered in their day-to-day routine", the resulting release is "a more realistic sonic picture of the city than you would get from a promotional or commercial project that tried to describe a city." He also credited the project with allowing him to notice more sounds in his native New York City. The Wire included the album in the "Outer Limits" section of their list of the best records of 2001. Legacy The London project was ultimately the beginning of what became Cusack's ongoing Favourite Sounds Project, which later explored diverse cities including Manchester, Birmingham, Southend-on-Sea, Prague, Berlin, Toronto and Beijing. As with the London instalment, all its successors focused on discovering what locals find positive about their cities and neighbourhoods and how they interact with them. In The Bloomsbury Handbook of the Anthropology of Sound (2021), Sam Auinger and Dietmar Offenhuber single out the London edition for reflecting "changes in the sensory qualities of the city as a result of the expansion of the Thames shore into a recreational area." Frohlich, in his 2004 book Audiophotography: Bringing Photos to Life with Sounds, considers Your Favourite London Sounds to have been part of the emergence of sound-based artistic projects which explore the sentimental value of sound, projects which reveal "a rich set of meanings and preferences for particular kinds of sounds." Clive Bell of Variant magazine considers the recording of Deptford Creek to have been "particularly memorable" for combining the power station hum with the sound of the Thames. Emily Nunn of The Chicago Tribune considers the London project to be the apogee of Cusack's field recording work, and notes Favourite Sounds of Beijing (2007) as a sequel. She believed such projects were "somewhat old-fashioned" by 2006 standards, but noted the London disc "sold well enough to pay for the project." Your Favourite London Sounds also inspired Jesse Seay's Your Favourite Chicago Sounds (2006), an online public archive of Chicago sounds that Cusack helped organise.
Track listing "Big Ben" – 0:51 "London Bridge Station" – 2:07 "Brixton Station" – 1:49 "'Mind The Gap', Bank Underground Station" – 0:49 "The Bank of England, 1.00AM" – 1:36 "Blackbird Dawn Chorus, 4.00AM in May" – 2:05 "Brick Lane" – 1:51 "Bagel Shop, Brick Lane" – 1:22 "The Bell on the 73 Bus" – 0:25 "Bus Pressure" – 1:00 "Butlers Wharf: Thames Sounds" – 2:01 "Canal Towpath Stones" – 2:01 "Club Queue, Hoxton" – 1:47 "Coffee, Soho" – 1:39 "Dalston Market" – 1:29 "Deptford Market" – 2:09 "Deptford Grid Electricity Sub-Station" – 2:20 "Regent's Park to Oxford Circus" – 5:00 "Escalator, King's Cross Underground Station" – 0:40 "Euston Main Line Railway Station" – 1:21 "Slamming Doors, Victoria Station" – 1:36 "Evening Birds in Abney Park Cemetery, Early May" – 1:59 "Michelle's Phone Message" – 0:58 "Fountain in Victoria Park, 1.00AM" – 1:25 "The Great Court of the British Museum" – 2:09 "Helicopter/East London Mosque" – 5:51 "Key in Door" – 0:36 "Onions Frying in My Flat" – 1:10 "Post Through Letterbox" – 1:20 "Nightingale/Hum" – 0:56 "London Thunder" – 4:09 "Rain on Skylight While Lying in Bed" – 1:31 "Bleeps at the Supermarket Checkout" – 0:46 "Tottenham Hotspurs Football Club, White Hart Lane" – 2:55 "St James's Park: Two Species of Baby" – 0:59 "Under the Flyover, Hackney Wick" – 1:06 "Taxis Waiting at Euston Station" – 0:59 "LRT Transformer, Putney" – 1:40 "Swifts Over Stoke Newington" – 0:54 "16th Floor Up" – 2:35 Personnel Adapted from the liner notes of Your Favourite London Sounds. Peter Cusack – compiling, editing, recording (all tracks except 3, 10, 20, 34 and 38) Matthias Kispert – recording ("Brixton Station") Tom Wallace – recording ("Bus Pressure") Bunny Schendler – recording ("Euston Main Line Railway Station") Clive Bell – recording ("Tottenham Hotspurs Football Club, White Hart Lane") Syngen Brown – recording ("LRT Transformer, Putney") Tom Brake – voice ("Coffee, Soho") Evrah – idea ("LRT Transformer, Putney") Dave Mandl – photography Ed Baxter – booklet design References 2001 albums Field recording Sound collage albums Sound effects albums Culture in London Arts in London Works about London London in popular culture
Your Favourite London Sounds
Engineering
3,197
37,338,762
https://en.wikipedia.org/wiki/Mathematical%20physiology
Mathematical physiology is an interdisciplinary science. Primarily, it investigates ways in which mathematics may be used to give insight into physiological questions. In turn, it also describes how physiological questions can lead to new mathematical problems. The field may be broadly grouped into two physiological application areas: cell physiology – including mathematical treatments of biochemical reactions, ionic flow and regulation of function – and systems physiology – including electrocardiology, circulation and digestion. References Mathematical and theoretical biology Physiology Systems biology
Mathematical physiology
Mathematics,Biology
95
12,747,972
https://en.wikipedia.org/wiki/Reduct
In universal algebra and in model theory, a reduct of an algebraic structure is obtained by omitting some of the operations and relations of that structure. The opposite of "reduct" is "expansion". Definition Let A be an algebraic structure (in the sense of universal algebra) or a structure in the sense of model theory, organized as a set X together with an indexed family of operations and relations φi on that set, with index set I. Then the reduct of A defined by a subset J of I is the structure consisting of the set X and J-indexed family of operations and relations whose j-th operation or relation for j ∈ J is the j-th operation or relation of A. That is, this reduct is the structure A with the omission of those operations and relations φi for which i is not in J. A structure A is an expansion of B just when B is a reduct of A. That is, reduct and expansion are mutual converses. Examples The monoid (Z, +, 0) of integers under addition is a reduct of the group (Z, +, −, 0) of integers under addition and negation, obtained by omitting negation. By contrast, the monoid (N, +, 0) of natural numbers under addition is not the reduct of any group. Conversely the group (Z, +, −, 0) is the expansion of the monoid (Z, +, 0), expanding it with the operation of negation. References Universal algebra Mathematical relations Model theory
Reduct
Mathematics
332
8,031,376
https://en.wikipedia.org/wiki/Astilbe%20Arendsii%20Group
Astilbe Arendsii Group (Astilbe × arendsii) is a cultivar group of complex hybrids with A. astilboides, A. chinensis, A. japonica, A. thunbergii and others. They are all perennial, herbaceous plants with flowers in various shades from white to purplish red. Numerous cultivars exist, a majority of them produced by breeders in Germany and Holland. The name is derived from the surname of German horticulturist Georg Arends who was responsible for nearly all hybrid cultivars sold in North America. Invalid names: Astilbe ×arendsii Arends Astilbe ×hybrida Ievinya & Lusinya References Other websites Photos Arendsii Group Hybrid plants Ornamental plant cultivars
Astilbe Arendsii Group
Biology
171
14,439,920
https://en.wikipedia.org/wiki/LGR4
Leucine-rich repeat-containing G-protein coupled receptor 4 is a protein that in humans is encoded by the LGR4 gene. LGR4 is known to have a role in the development of the male reproductive tract, eyelids, hair and bone. Mutations in this gene have been associated with osteoporosis (doi:10.1038/nature12124). References Further reading G protein-coupled receptors
LGR4
Chemistry
90