Column schema (field: type, observed range):
id: int64, 39 to 79M
url: string, length 31 to 227 characters
text: string, length 6 to 334k characters
source: string, length 1 to 150 characters
categories: list, 1 to 6 items
token_count: int64, 3 to 71.8k
subcategories: list, 0 to 30 items
3,018,168
https://en.wikipedia.org/wiki/Kleptolagnia
Kleptolagnia (from Greek kleptein meaning "to steal", and lagnia meaning "sexual excitement") is the state of being sexually aroused by theft. A kleptolagniac is a person aroused by the act of theft. It is also known as kleptophilia, and is a sexual form of kleptomania. See also Chremastistophilia References Sexual fetishism Theft
Kleptolagnia
[ "Biology" ]
96
[ "Behavior", "Sexuality stubs", "Sexuality" ]
3,018,251
https://en.wikipedia.org/wiki/Beryllium%20oxide%20%28data%20page%29
This page provides supplementary chemical data on beryllium oxide. Material Safety Data Sheet Beryllium Oxide MSDS from American Beryllia Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Beryllium oxide (data page)
[ "Chemistry" ]
48
[ "Chemical data pages", "nan" ]
3,018,300
https://en.wikipedia.org/wiki/Bismuth%28III%29%20oxide%20%28data%20page%29
This page provides supplementary chemical data on bismuth(III) oxide. Material Safety Data Sheet MSDS from Fischer Scientific Structure and properties Thermodynamic properties Spectral data References Chemical data pages Chemical data pages cleanup
Bismuth(III) oxide (data page)
[ "Chemistry" ]
45
[ "Chemical data pages", "nan" ]
3,018,439
https://en.wikipedia.org/wiki/Frederick%20P.%20Salvucci
Frederick "Fred" Peter Salvucci (born April 8, 1940) is an American civil engineer and educator, who specializes in transportation issues. Salvucci was the Secretary of Transportation for the Commonwealth of Massachusetts under Governor Michael Dukakis, serving a total of 12 years. He is currently a Senior Lecturer at the MIT Center for Transportation and Logistics. Career Born in Brighton, Salvucci graduated from Boston Latin School in 1957. He then attended the Massachusetts Institute of Technology, where he received his Bachelor of Science in 1961 and his Master of Science in 1962, both in civil engineering with a specialization in transportation. At MIT, he was a member of Chi Epsilon and the American Society of Civil Engineers. From 1964 to 1965, he spent a year abroad as a Fulbright Scholar at the University of Naples Federico II, where he studied investments in transportation to stimulate economic development in high poverty areas of Southern Italy. From 1970 to 1974, Salvucci served as a transportation advisor to Boston's mayor, Kevin White. He subsequently served two terms as Massachusetts Secretary of Transportation under Michael Dukakis from 1975 to 1978 and 1983 to 1990. During his tenure, he gave particular emphasis to the expansion of the transit system, the development of financial and political support for the Big Dig, and the implementation of strategies in compliance with the Clean Air Act. Other efforts included the extension of the Red Line to Quincy and Alewife, the relocation of the Orange Line to the Southwest Corridor, the acquisition and modernization of MBTA Commuter Rail, the restructuring of the MBTA, and planning for the redevelopment of Park Square by placing the State Transportation Building there. During 1994 to 2003, Salvucci was a key developer, in collaboration with professor Nigel Wilson (MIT), of an innovative research and educational collaborative with the University of Puerto Rico and the Puerto Rico Highway and Transportation Authority, focused on development of Tren Urbano, a new rail transit system for San Juan, which has now been replicated in Chicago, London, Hong Kong, and San Sebastian (Spain), as well as the MBTA. Salvucci has also participated in restructuring the commuter rail and urban transit system in Buenos Aires, Argentina to use concession contracts with the private sector to renew the physical capital of the rail systems, and to improve passenger service. One of Salvucci's best-known projects is the "Big Dig" (Central Artery/Tunnel Project) in Boston that put an above-ground expressway underground, created a third Harbor Tunnel, and rejuvenated downtown Boston into an even more vibrant district. Salvucci created the vision, persuaded politicians on both state and national levels, and obtained the funds to complete this megaproject. , Salvucci is partly retired, but is still involved in the Allston Multimodal Project in Massachusetts. In the past, he has taught urban planning courses on transportation at MIT, and also at Harvard University's Graduate School of Design. References External links MIT profile Frederick P. Salvucci oral history, June 02, 2016 | Northeastern University Library 1940 births Living people American people of Italian descent People from Boston Boston Latin School alumni American Society of Civil Engineers University of Naples Federico II Massachusetts Secretaries of Transportation MIT School of Architecture and Planning faculty Harvard Graduate School of Design faculty
Frederick P. Salvucci
[ "Engineering" ]
661
[ "American Society of Civil Engineers", "Civil engineering organizations" ]
3,018,887
https://en.wikipedia.org/wiki/Mautner%27s%20lemma
Mautner's lemma in representation theory, named after Austrian-American mathematician Friederich Mautner, states that if G is a topological group and π is a unitary representation of G on a Hilbert space H, then for any x in G whose conjugates yxy−1 converge to the identity element e along some net of elements y, any vector v of H that is invariant under all the π(y) is also invariant under π(x). References F. Mautner, Geodesic flows on symmetric Riemannian spaces (1957), Ann. Math. 65, 416-430 Unitary representation theory Topological groups Theorems in representation theory Lemmas in group theory
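A compact symbolic restatement of the lemma above, writing (y_α) for the net of elements y (this notation is added here for readability; it is not in the original text):

\[
y_\alpha x\, y_\alpha^{-1} \to e \ \text{in } G
\quad\text{and}\quad
\pi(y_\alpha)v = v \ \text{for all } \alpha
\ \Longrightarrow\
\pi(x)v = v .
\]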
Mautner's lemma
[ "Mathematics" ]
149
[ "Algebra stubs", "Space (mathematics)", "Topological spaces", "Topological groups", "Algebra" ]
3,018,981
https://en.wikipedia.org/wiki/Formal%20moduli
In mathematics, formal moduli are an aspect of the theory of moduli spaces (of algebraic varieties or vector bundles, for example), closely linked to deformation theory and formal geometry. Roughly speaking, deformation theory can provide the Taylor polynomial level of information about deformations, while formal moduli theory can assemble consistent Taylor polynomials to make a formal power series theory. The step to moduli spaces, properly speaking, is an algebraization question, and has been largely put on a firm basis by Artin's approximation theorem. A formal universal deformation is by definition a formal scheme over a complete local ring, with special fiber the scheme over a field being studied, and with a universal property amongst such set-ups. The local ring in question is then the carrier of the formal moduli. References Moduli theory Algebraic geometry Geometric algebra
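To make the "Taylor polynomial" analogy above concrete in the one-parameter case (a standard illustration, not taken from the article itself): a first-order deformation of a scheme X_0 over a field k lives over the dual numbers, and a formal deformation assembles compatible deformations over every truncation into a single object over a complete local ring:

\[
X_1 \to \operatorname{Spec} k[\varepsilon]/(\varepsilon^2)
\quad\text{(first-order data)},
\qquad
\{\,X_n \to \operatorname{Spec} k[t]/(t^{n+1})\,\}_{n\ge 0}
\ \rightsquigarrow\
\mathfrak{X} \to \operatorname{Spf} k[[t]]
\quad\text{(formal deformation)},
\]

with each X_n restricting to X_{n−1} over the smaller truncation and all of them having special fiber X_0.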
Formal moduli
[ "Mathematics" ]
168
[ "Mathematical analysis", "Algebraic geometry", "Mathematical analysis stubs", "Fields of abstract algebra", "Geometry", "Geometry stubs" ]
3,019,076
https://en.wikipedia.org/wiki/Cartwheel%20Galaxy
The Cartwheel Galaxy (also known as ESO 350-40 or PGC 2248) is a lenticular ring galaxy about 500 million light-years away in the constellation Sculptor. It has a D25 isophotal diameter of , and a mass of about solar masses; its outer ring has a circular velocity of . It was discovered by Fritz Zwicky in 1941. Zwicky considered his discovery "one of the most complicated structures awaiting its explanation on the basis of stellar dynamics." The Third Reference Catalogue of Bright Galaxies (RC3) measured a D25 isophotal diameter for the Cartwheel Galaxy at about 60.9 arcseconds, giving it a diameter of based on a redshift-derived distance of . This diameter is slightly smaller than that of the Andromeda Galaxy. The large Cartwheel Galaxy is the dominant member of the Cartwheel Galaxy group, consisting of four physically associated spiral galaxies. The three companions are referred to in several studies as G1, the smaller irregular blue Magellanic spiral; G2, the yellow compact spiral with a tidal tail; and G3, a more distant spiral often seen in wide field images. One supernova has been observed in the Cartwheel Galaxy. SN 2021afdx (type II, mag. 18.8) was discovered by ATLAS on 23 November 2021. Structures The structure of the Cartwheel Galaxy is noted to be highly complicated and heavily disturbed. The Cartwheel consists of two rings: the outer ring, the site of massive ongoing star formation due to gas and dust compression; and the inner ring that surrounds the galactic center. A ring of dark absorbing dust is also present in the nucleic ring. Several optical arms or "spokes" are seen connecting the outer ring to the inner. Observations show the presence of both non-thermal radio continuum and optical spokes, but the two do not seem to overlap. Evolution The galaxy was once a normal spiral galaxy before it apparently underwent a head-on "bullseye" style collision with a smaller companion approximately 200–300 million years prior to how we see the system today. When the nearby galaxy passed through the Cartwheel Galaxy, the force of the collision caused a powerful gravitational shock wave to expand through the galaxy. Moving at high speed, the shock wave swept up and compressed gas and dust, creating a Starburst region around the galaxy's center portion that went unscathed as it expanded outwards. This explains the bluish ring around the center, which is the brighter portion. It can be noted that the galaxy is beginning to retake the form of a normal spiral galaxy, with arms spreading out from a central core. These arms are often referred to as the cartwheel's “spokes”. Alternatively, a model based on the gravitational Jeans instability of both axisymmetric (radial) and nonaxisymmetric (spiral) small-amplitude gravity perturbations allows an association between growing clumps of matter and the gravitationally unstable axisymmetric and nonaxisymmetric waves which take on the appearance of a ring and spokes. Based on observational data, however, this theory of ring galaxy evolution does not appear to apply to this specific galaxy. While most images of the Cartwheel display three galaxies close together, a fourth physically associated companion (also known as G3) is known to be associated with the group through an HI (or neutral hydrogen) tail that connects G3 to the cartwheel. Due to the presence of the HI tail, it is widely believed that G3 is the "bullet" galaxy that plunged through the disk of the cartwheel, creating its current shape, not G1 or G2. 
This hypothesis makes sense given the size and predicted age of the current structure (~300 million years old as mentioned before). Considering how close G1 and G2 are to the Cartwheel still, it is much more widely believed that the roughly 88 kpc (~287,000 light years) distant G3 is the intruding galaxy. HI tail mapping is extremely useful in determining “culprit” galaxies in similar cases where the solution is relatively unclear. Hydrogen gas, being the lightest and most abundant gas in galaxies, is easily torn away from parent galaxies through gravitational forces. Evidence of this can be seen in the Jellyfish Galaxy and the Comet Galaxy, which are undergoing a type of gravitational effect called ram pressure stripping, and other galaxies with tidal tails and star forming stellar streams associated with collisions and mergers. Ram pressure stripping will almost always cause trailing-dominant tails of HI gas as a galaxy infalls into a galaxy cluster, while mergers and collisions like the ones involving in Cartwheel galaxy often create leading-dominant tails as the culprit galaxy’s gravity attracts and pulls on the victim galaxy’s gas in the direction of the culprit's motion. The existing structure of the cartwheel is expected to disintegrate over the next few hundred million years as the remaining gas, dust and stars that haven’t escaped the galaxy begin to infall back towards the center. It is likely that the galaxy will regain a spiral shape after the infall process completes and spiral density waves have a chance to reform. This is only possible if companions G1, G2 and G3 remain distant and do not undergo an additional collision with the cartwheel. X-ray sources The unusual shape of the Cartwheel Galaxy may be due to a collision with a smaller galaxy such as one of those in the lower left of the image. The most recent starburst has lit up the Cartwheel rim, which has a diameter larger than that of the Milky Way. Star formation via starburst galaxies, such as the Cartwheel Galaxy, results in the formation of large and extremely luminous stars. When massive stars explode as supernovas, they leave behind neutron stars and black holes. Some of these neutron stars and black holes have nearby companion stars, and become powerful sources of X-rays as they pull matter off their companions (also known as ultra and hyperluminous X-ray sources). The brightest X-ray sources are likely black holes with companion stars, appearing as the white dots that lie along the rim of the X-ray image. The Cartwheel contains an exceptionally large number of these black hole binary X-ray sources, because many massive stars formed in the ring. References External links Galaxy Evolution Simulation:The Cartwheel Galaxy Cartwheel Galaxy at Constellation Guide Webb Captures Stellar Gymnastics in The Cartwheel Galaxy nasa.gov Lenticular galaxies Peculiar galaxies Ring galaxies Sculptor (constellation) 02248 Astronomical objects discovered in 1941
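A back-of-the-envelope small-angle check of the D25 figures quoted above. The exact redshift-derived distance used by RC3 did not survive this extraction, so the roughly 500 million light-year distance from the article's opening sentence is assumed here:

# Rough physical diameter from the 60.9-arcsecond D25 angular size, assuming
# the ~500 million light-year distance quoted earlier in the article.
import math

distance_ly = 500e6                        # assumed distance (light-years)
theta_rad = 60.9 * math.pi / (180 * 3600)  # 60.9 arcseconds in radians

diameter_ly = distance_ly * theta_rad
print(f"~{diameter_ly:,.0f} light-years")  # about 1.5e5 ly, a bit under Andromeda's D25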
Cartwheel Galaxy
[ "Astronomy" ]
1,346
[ "Constellations", "Sculptor (constellation)" ]
3,019,112
https://en.wikipedia.org/wiki/Trifluoroacetic%20acid
Trifluoroacetic acid (TFA) is a synthetic organofluorine compound with the chemical formula CF3CO2H. It is a haloacetic acid, with all three of the acetyl group's hydrogen atoms replaced by fluorine atoms. It is a colorless liquid with a vinegar-like odor. TFA is a stronger acid than acetic acid, having an acid ionisation constant, Ka, that is approximately 34,000 times higher, as the highly electronegative fluorine atoms and consequent electron-withdrawing nature of the trifluoromethyl group weakens the oxygen-hydrogen bond (allowing for greater acidity) and stabilises the anionic conjugate base. TFA is commonly used in organic chemistry for various purposes. Synthesis TFA is prepared industrially by the electrofluorination of acetyl chloride or acetic anhydride, followed by hydrolysis of the resulting trifluoroacetyl fluoride: CH3COCl + 4 HF → CF3COF + 3 H2 + HCl CF3COF + H2O → CF3CO2H + HF Where desired, this compound may be dried by addition of trifluoroacetic anhydride. An older route to TFA proceeds via the oxidation of 1,1,1-trifluoro-2,3,3-trichloropropene with potassium permanganate. The trifluorotrichloropropene can be prepared by Swarts fluorination of hexachloropropene. Uses TFA is the precursor to many other fluorinated compounds such as trifluoroacetic anhydride, trifluoroperacetic acid, and 2,2,2-trifluoroethanol. It is a reagent used in organic synthesis because of a combination of convenient properties: volatility, solubility in organic solvents, and its strength as an acid. TFA is also less oxidizing than sulfuric acid but more readily available in anhydrous form than many other acids. One complication to its use is that TFA forms an azeotrope with water (b. p. 105 °C). TFA is used as a strong acid to remove protecting groups such as Boc used in organic chemistry and peptide synthesis. At a low concentration, TFA is used as an ion pairing agent in liquid chromatography (HPLC) of organic compounds, particularly peptides and small proteins. TFA is a versatile solvent for NMR spectroscopy (for materials stable in acid). It is also used as a calibrant in mass spectrometry. TFA is used to produce trifluoroacetate salts. Safety Trifluoroacetic acid is a strong acid. TFA is harmful when inhaled, causes severe skin burns and is toxic for aquatic organisms even at low concentrations. Skin burns are severe, heal poorly and can be necrotic. Vapour fumes have an LC50 of 10.01 mg/L, tested on rats over 4 hours. Inhalation symptoms include mucus irritation, coughing, shortness of breath and possible formation of oedemas in the respiratory tract. Exposure damages the kidneys. Environment Although trifluoroacetic acid is not produced biologically or abiotically, it is a metabolic breakdown product of the volatile anesthetic agent halothane. It is also thought to be responsible for halothane-induced hepatitis. It also may be formed by photooxidation of the commonly used refrigerant 1,1,1,2-tetrafluoroethane (R-134a). Moreover, it is formed as an atmospheric degradation product of almost all fourth-generation synthetic refrigerants, also called hydrofluoroolefins (HFO), such as 2,3,3,3-tetrafluoropropene. Trifluoroacetic acid degrades very slowly in the environment and has been found in increasing amounts as a contaminant in water, soil, food, and the human body. Median concentrations of a few micrograms per liter have been found in beer and tea. Seawater can contain about 200 ng of TFA per liter. Biotransformation by decarboxylation to fluoroform has been discussed. Trifluoroacetic acid is mildly phytotoxic.
See also Fluoroacetic acid – highly toxic but naturally occurring rodenticide CH2FCOOH Difluoroacetic acid Trichloroacetic acid, the chlorinated analog Trifluoroacetone – also abbreviated TFA References Perfluorocarboxylic acids Reagents for organic chemistry Organic compounds with 2 carbon atoms
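A quick check of the "34,000 times" acidity comparison above, assuming textbook pKa values of roughly 0.23 for trifluoroacetic acid and 4.76 for acetic acid (neither number appears in the text itself):

\[
\frac{K_a(\mathrm{CF_3CO_2H})}{K_a(\mathrm{CH_3CO_2H})}
= 10^{\,\mathrm{p}K_a(\mathrm{CH_3CO_2H})-\mathrm{p}K_a(\mathrm{CF_3CO_2H})}
\approx 10^{\,4.76-0.23}
\approx 3.4\times 10^{4}.
\]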
Trifluoroacetic acid
[ "Chemistry" ]
991
[ "Organic compounds", "Reagents for organic chemistry", "Organic compounds with 2 carbon atoms" ]
3,019,343
https://en.wikipedia.org/wiki/Euryapsida
Euryapsida is a polyphyletic (unnatural, as the various members are not closely related) group of sauropsids that are distinguished by a single temporal fenestra, an opening behind the orbit, under which the post-orbital and squamosal bones articulate. They are different from Synapsida, which also have a single opening behind the orbit, by the placement of the fenestra. In synapsids, this opening is below the articulation of the post-orbital and squamosal bones. It is now commonly believed that euryapsids (particularly sauropterygians) are in fact diapsids (which have two fenestrae behind the orbit) that lost the lower temporal fenestra. Euryapsids are usually considered entirely extinct, although turtles might be part of the sauropterygian clade while other authors disagree. Euryapsida may also be a synonym of Sauropterygia sensu lato. The ichthyosaurian skull is sometimes described as having a metapsid (or parapsid) condition instead of a truly euryapsid one. In ichthyosaurs, the squamosal bone is never part of the fenestra's margin. Parapsida was originally a taxon consisting of ichthyosaurs, squamates, protorosaurs, araeoscelidans and pleurosaurs. Historically, a variety of reptiles with upper fenestrae, either alone or with a lower emargination, have been considered euryapsid or parapsid, and to have had their patterns of fenestration originate separately from those of diapsids. This includes araeoscelidans, mesosaurs, squamates, pleurosaurids, weigeltisaurids, protorosaurs, and trilophosaurs. With the exception of mesosaurs, which only have the lower temporal opening, all of these are universally agreed to be diapsids which either secondarily closed the lower opening (araeoscelids, trilophosaurs) or lost the lower bar (squamates, pleurosaurs, protorosaurs). Euryapsida was proposed by Edwin H. Colbert as a substitute for the earlier term Synaptosauria, originally created by Edward D. Cope for a taxon including sauropterygians, turtles and rhynchocephalians. Baur removed the rhynchocephalians from Synaptosauria and Williston later resurrected the taxon, including only Sauropterygia (Nothosauria and Plesiosauria) and Placodontia in it. The terms Enaliosauria and Halisauria have also been used for a taxon including ichthyosaurs and sauropterygians. Some 21st century studies have found that ichthyosaurs, thalattosaurs and sauropterygians were close relatives, either as stem-archosaurs or as stem-saurians. See also Anapsida Diapsida Synapsida References Polyphyletic groups Prehistoric marine reptiles Prehistoric reptile taxonomy
Euryapsida
[ "Biology" ]
666
[ "Phylogenetics", "Polyphyletic groups" ]
3,019,772
https://en.wikipedia.org/wiki/First%20Battle%20of%20Sirte
The First Battle of Sirte was fought between forces of the British Mediterranean Fleet and the (Italian Royal Navy) during the Battle of the Mediterranean in the Second World War. The engagement took place on 17 December 1941, south-east of Malta, in the Gulf of Sirte. The engagement was inconclusive as both forces were protecting convoys and wished to avoid battle. In the following days, two Royal Navy forces based at Malta ran into the Italian Minefield T off Tripoli and two British battleships were disabled by Italian manned torpedoes during the Raid on Alexandria. By the end of December, the balance of naval power in the Mediterranean had shifted in favour of the . Background The British Eighth Army and the Axis armies in North Africa were engaged in battles resulting from Operation Crusader, which had been fought between 18 November and 4 December. Its aim was to defeat the Afrika Korps and relieve the siege of Tobruk. This had been achieved and Axis forces were conducting a fighting retreat; by 13 December, they were holding a defensive line at Gazala, east of Benghazi. The Axis were desperate to supply their forces, intending to transport stores to Tripoli, their main port in Libya and Benghazi, the port closest to the front line. The island garrison of Malta was under siege and the British wanted to supply their forces on the island. Prelude Convoy M41/M42 The Italians were preparing to send Convoy M41, of eight ships, to Africa on 13 December 1941. That morning, their previous supply attempt, two fast cruisers carrying fuel to Tripoli, had failed when they were sunk at the Battle of Cape Bon by a force of destroyers en route to Alexandria. The eight merchant ships were in three groups, with a close escort of five destroyers and a distant cover force of the battleships and , four destroyers and two torpedo boats. Soon after sailing on 13 December, a group of Convoy M41 was attacked by the British submarine and two ships were sunk; later that day two ships collided and had to return to base, while the distant cover force was sighted by the submarine and Vittorio Veneto was torpedoed and forced to return to port. Supermarina, the high command of the Italian navy, rattled by these losses and a report that a British force of two battleships was at sea, ordered the ships to return to await reinforcement but the "force of two battleships" was a decoy operation by the minelayer . On 16 December, the four-ship Italian convoy, renamed Convoy M42, left Taranto, picking up escorts along the way. The close escort was provided by seven destroyers and a torpedo boat; by the time they reached Sicily they were also accompanied by a close cover force, comprising the battleship , three light cruisers and three destroyers. The distant covering force consisted of the battleships Littorio, and , two cruisers and 10 destroyers. Allied convoy The British planned to run supplies to Malta using the fast merchant ship Breconshire, covered by a force of cruisers and destroyers, while the destroyers from the Cape Bon engagement would proceed to Alexandria from Malta covered by Force K and Force B from Malta on 15 December. The British force was depleted when the light cruiser was torpedoed and sunk by , just before midnight on 14 December. U-557 was accidentally sunk less than 48 hours later, by the Italian torpedo boat Orione. On 15 December, Breconshire sailed from Alexandria escorted by three cruisers and eight destroyers under Rear-Admiral Philip Vian in . 
On 16 December, the four destroyers of 4th Flotilla (Commander G. Stokes in ) left Malta, covered by Force K (Captain W. G. "Bill" Agnew in ), two cruisers and two destroyers. Thirty Italian warships were escorting four cargo ships. The two British groups were also at sea and steaming toward each other; the opposing forces were likely to cross each other's tracks east of Malta on 18 December. Battle On 17 December, an Italian reconnaissance aircraft spotted the British westbound formation near Sidi Barrani, apparently proceeding from Alexandria to intercept the Italian convoy. The British convoy was shadowed by Axis aeroplanes and attacked during the afternoon but no hits were scored and Agnew and Stokes met the westbound convoy. By late afternoon the Italian fleet was close by and spotter planes from the battleships had made contact with the British convoy, but the planes misidentified Breconshire as a battleship. At 17:42, the fleets sighted each other; Admiral Angelo Iachino—commander of the Italian forces—moved to intercept to defend his convoy. Vian also wished to avoid combat, so with the British giving ground and the Italians pursuing with caution, the British were easily able to avoid an engagement. Just after sunset, an air attack on the British ships caused them to return fire with their anti-aircraft guns, allowing the Italian naval force to spot them. Iachino took in the distant covering force and opened fire at about , well out of range of the British guns. Vian immediately laid smoke and moved to the attack while Breconshire moved away, escorted by the destroyers and . Lacking radar and mindful of their defeat in the night action at the Battle of Cape Matapan, the Italians wished to avoid a night engagement. The Italians fired for only 15 minutes before disengaging and returning westwards to cover convoy M42. suffered the loss of one midshipman and some damage due to a near-miss either from an shell, possibly fired by the Italian cruiser or as stated by British official reports by shell splinters from Andrea Doria and Giulio Cesare, that knocked down wireless aerials and holed the hull, superstructure and ship's boats. According to Italian sources, the Royal Australian Navy (RAN) destroyer was also damaged by near-misses from the . British reports tell of other warships punctured by splinters. Aftermath Minefield T After dark, Vian turned to return with Stokes to Alexandria, leaving Agnew to bring Breconshire to Malta, joined by Force B, one cruiser (the other was under repair) and two destroyers. Breconshire and her escorts arrived in Malta at 15:00 on 18 December. At midday, the Italian force also split up and three ships headed for Tripoli, accompanied by the close cover force, while the German supply ship Ankara, headed for Benghazi. The distant cover force remained on station in the Gulf of Sidra until evening, before heading back to base. The British had now realised that the Italians had a convoy in the area; Vian searched for it without success as he returned to Alexandria. In the afternoon, the position of the Tripoli group was established; a cruiser and two destroyers of Force B and two cruisers and two destroyers of Force K (Captain O'Conor, on the cruiser ) sortied from Malta at 18:00 to intercept. The force ran into a minefield (Minefield T) off Tripoli, in the early hours of 19 December. The minefield took the British by surprise as the water-depth was , which they had thought was too deep for mines. 
Neptune struck four mines and sank, the destroyer struck a mine and was scuttled the following day. The cruisers Aurora and were badly damaged but were able to return to Malta. About 830 Allied seamen, many of them New Zealanders from Neptune, were killed. The Malta Strike Force, which had been such a threat to Axis shipping to Libya during most of 1941, was much reduced in its effectiveness and was later forced to withdraw to Gibraltar. Attack on Alexandria While steaming back to Alexandria along with Vian's force, destroyer reported an apparently successful depth-charge attack on an unidentified submarine. The only axis submarine off Alexandria was the Italian , which was carrying a group of six Italian frogmen commandos, including Luigi Durand De La Penne, equipped with manned torpedoes. Shortly after Vian's force arrived in Alexandria, on the night of 18 December, the Italians penetrated the harbour and attacked the fleet. Jervis was damaged, a large Norwegian tanker disabled and the battleships and were severely damaged. This was a strategic change of fortune against the Allies whose effects were felt in the Mediterranean for several months. Results Both sides achieved their strategic objectives; the British got supplies through to Malta and the Axis got their ships through to Tripoli and Benghazi, although Benghazi fell to the Eighth Army five days later, on 24 December. Order of battle Forces present 17 December 1941 Italy Admiral Angelo Iachino (on Littorio) Close covering force – Vice Admiral Raffaele de Courten (on Duca d'Aosta) One battleship: Three light cruisers (7a Divisione Incrociatori): , , Three destroyers, , , and Distant covering force – Vice Admiral Angelo Parona (on Gorizia) Three battleships: , and Two heavy cruisers: , and 10 destroyers, , (9a Squadriglia Cacciatorpediniere) (10a Squadriglia Cacciatorpediniere) , (12a Squadriglia Cacciatorpediniere) , , , (13a Squadriglia Cacciatorpediniere) (16a Squadriglia Cacciatorpediniere) Close escort: Six destroyers: (7a Squadriglia Cacciatorpediniere) , (14a Squadriglia Cacciatorpediniere) , (15a Squadriglia Cacciatorpediniere) (16a Squadriglia Cacciatorpediniere) Convoy M42 motorships Monginevro, Napoli, Vettor Pisani freighter Ankara (German) Allies Convoy Escort – Rear-Admiral Philip Vian (on Naiad) Three light cruisers: , , Eight destroyers, , , , (damaged), (damaged), , and (14th Destroyer Flotilla) Convoy Fast merchantman: Breconshire Force K Two light cruisers: , Two destroyers , Force B One cruiser: Two destroyers: , 4th Destroyer Flotilla Four destroyers, , , , See also Second Battle of Sirte Notes References Further reading External links La I Battaglia della Sirte Prima battaglia della Sirte – Plancia di Commando The Italian Navy in World War II Sirte 1941 in Libya Sirte, First Sirte Sirte Sirte Sirte, First Gulf of Sidra Maritime incidents in Libya Sirte, First Battle of Sirte Sirte December 1941
First Battle of Sirte
[ "Engineering" ]
2,111
[ "Military engineering", "Mine warfare" ]
3,019,805
https://en.wikipedia.org/wiki/Steel%20detailer
A steel detailer is a person who produces detailed drawings for steel fabricators and steel erectors. The detailer prepares detailed plans, drawings and other documents for the manufacture and erection of steel members (columns, beams, braces, trusses, stairs, handrails, joists, metal decking, etc.) used in the construction of buildings, bridges, industrial plants, and nonbuilding structures. Steel detailers (usually simply called detailers within their field) work closely with architects, engineers, general contractors and steel fabricators. They usually find employment with steel fabricators, engineering firms, or independent steel detailing companies. Steel detailing companies and self-employed detailers subcontract primarily to steel fabricators and sometimes to general contractors and engineers. Training and certification United States Collegiate degree programs specific to structural steel detailing are rare to nonexistent in the U.S., but more general degree and certification programs may be found with curricula pertaining to design, manual or computer-aided drafting in general, or specific computer-aided drafting software. A college degree is not required to become a steel detailer in the U.S. Training is usually provided on the job, with a new trainee usually needing about five years of practice under an experienced detailer to become proficient with all of the requirements of the trade. Practitioners of this occupation in the U.S. may range from degreed, and possibly licensed, civil/structural engineers to those with little or no formal academic training who nevertheless possess extensive industry experience. Certification of structural steel detailers is not required in the United States. The National Institute of Steel Detailing (NISD) offers a selection of certification programs for steel detailers and detailing companies, but these are strictly voluntary. Canada In Vancouver, British Columbia, Canada, there are college courses specifically for steel detailing. Vancouver Community College Downtown Campus has been offering a Steel Detailing Certificate for many years. It is approximately a one-year program. BCIT (British Columbia Institute of Technology) also offers training. Many of the most well-trained steel detailers in British Columbia have attended these institutions. Responsibilities A steel detailer prepares two primary types of drawings: erection drawings and shop drawings. Erection drawings are used to guide the steel erector on the construction site ("in the field") as to where and how to erect the fabricated steel members. These drawings usually show dimensioned plans to locate the steel members, and they often also show details with specific information and requirements, including all work that must be done in the field (such as bolting, welding or installing wedge anchors). Since the erection drawings are intended for use in the field, they contain very little specific information about the fabrication of any individual steel member; members should already be completed by the time the erection drawings are used. Shop drawings, also called detail drawings, are used to specify the exact detailing requirements for fabricating each individual member (or "piece") of a structure, and are used by the steel fabricator to fabricate these members. Complete shop drawings show material specifications, member sizes, all required dimensions, welding, bolting, surface preparation and painting requirements, and any other information required to describe each completed member.
The shop drawings are intended for use by the fabrication shop, and thus contain little or no information about the erection and installation of the steel members they depict; this information belongs in the erection drawings. The detailer must comply with the requirements of the design drawings and with all industry standards and protocols, such as those established by the American Institute of Steel Construction (AISC) and the American Welding Society (AWS). The detailer is usually not responsible for design, including structural strength, stiffness, and stability (which are the responsibility of the structural engineer), major dimensions of the structure and compliance with relevant building codes (which are the responsibility of the architect). A detailer is generally required to submit his drawings to the structural engineer and/or architect for review prior to the release of drawings for fabrication. However, to complete his drawings, an experienced steel detailer usually suggests connections subject to the approval of the structural engineer in cases where the structural drawings have insufficient information. In these situations, the steel detailer is guided by his experience and knowledge of existing engineering codes such as the Steel Construction Manual published by AISC. In the case of non-building structures there is typically no architect, and detail drawings are reviewed exclusively by the structural engineer of record. This design review ideally assures engineering accuracy and compliance with the design intent. Techniques Traditionally, steel detailing was accomplished via manual drafting methods, using pencils, paper, and drafting tools such as a parallel bar or drafting machine, triangles, templates of circles and other useful shapes, and mathematical tables, such as tables of logarithms and other useful calculational aids. Eventually, hand-held calculators were incorporated into the traditional practice. Today, manual drafting has been largely replaced by computer-aided drafting (CAD). A steel detailer using computer-aided methods creates drawings on a computer, using software specifically designed for the purpose, and printing out drawings on paper only when they are complete. Many detailers would add another classification for those using 3-D Modeling applications specifically designed for steel detailing, as the process for the production of drawings using these applications is markedly different from a 2-D drafting approach. The detailer literally builds the project in 3D before producing detailed shop drawings from the model. Structural steel detailing requires skills in drafting, mathematics (including geometry and trigonometry), logic, reasoning, spatial visualization, and communication. A basic knowledge of general engineering principles and the methods of structural and miscellaneous steel fabrication, however acquired, is essential to the practice of this discipline. A computer-aided detailer also requires skills in using computers and an understanding of the specific CAD software used. A detailer's drawings generally go through several phases. If there is any unclear information that would prevent the detailer from creating or completing the drawings accurately, a request for information(RFI) is sent to the relevant trades(typically the general contractor, architect or structural engineer) before proceeding. If the required information is not needed immediately, then the detailer may opt to list the questions on the drawings. 
Following creation of the drawing, the detailer must usually (as described above) submit a copy of the drawing to the architect and engineer for review ("approval"). Copies of the drawing may be sent to other recipients at this time as well, such as the general contractor, for informational purposes only. The drawing must also be checked for accuracy and completeness by another detailer (for this purpose, the "checker"). To keep track of changes during the drawing creation workflow, the revisions are identified by incrementing an associated number or letter code which should appear in the drawing revision block. Comments arising from approval and corrections made during checking must be resolved, and the original drawing must be updated accordingly (or "scrubbed"). After this, the drawing may be released to the fabricator and/or erector for use in construction. List of steel detailing software See also Drafter Modeler Model Checker Drawing Checker References External links AISC Advance Steel Detailing Software Website Bentley ProSteel Detailing Software Website Parabuild Structural Steel Detailing Software Website SDS/2 Structural Steel Detailing Software Website http://www.techfnatic.com/ Structural Steel Detailing Software Website Soft Steel Detailing Software Website SSDCP Structural Steel Detailing Software Website Tekla Structures Steel Detailing Software Website Tekla Structures Interoperability and Formats The Steel Detailer software website Solidworks software website TSteel 3D software website Construction trades workers Structural steel
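As a toy illustration of the revision-tracking and release workflow described above (approval by the engineer and architect, checking by a second detailer, and an incrementing revision code in the drawing revision block): the class and field names are invented for this sketch and do not correspond to any real detailing software.

from dataclasses import dataclass

@dataclass
class ShopDrawing:
    # Toy model of a shop drawing moving through the detailing workflow.
    piece_mark: str
    revision: int = 0        # code shown in the drawing revision block
    approved: bool = False   # engineer/architect review ("approval")
    checked: bool = False    # review by a second detailer (the "checker")

    def revise(self) -> None:
        # Resolving approval comments or checking corrections bumps the revision.
        self.revision += 1
        self.approved = False
        self.checked = False

    def releasable(self) -> bool:
        # Released to the fabricator/erector only once approved and checked.
        return self.approved and self.checked

drawing = ShopDrawing("B-101")
drawing.revise()                            # drawing "scrubbed" after comments
drawing.approved = drawing.checked = True
print(drawing.releasable(), "revision", drawing.revision)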
Steel detailer
[ "Engineering" ]
1,567
[ "Structural engineering", "Structural steel" ]
3,019,875
https://en.wikipedia.org/wiki/Software%20factory
A software factory is a structured collection of related software assets that aids in producing computer software applications or software components according to specific, externally defined end-user requirements through an assembly process. A software factory applies manufacturing techniques and principles to software development to mimic the benefits of traditional manufacturing. Software factories are generally involved with outsourced software creation. Description In software engineering and enterprise software architecture, a software factory is a software product line that configures extensive tools, processes, and content using a template based on a schema to automate the development and maintenance of variants of an archetypical product by adapting, assembling, and configuring framework-based components. Since coding requires a software engineer (or the parallel in traditional manufacturing, a skilled craftsman) it is eliminated from the process at the application layer, and the software is created by assembling predefined components instead of using traditional IDEs. Traditional coding is left only for creating new components or services. As with traditional manufacturing, the engineering is left to creation of the components and the requirements gathering for the system. The end result of manufacturing in a software factory is a composite application. Purpose Software–factory–based application development addresses the problem of traditional application development where applications are developed and delivered without taking advantage of the knowledge gained and the assets produced from developing similar applications. Many approaches, such as training, documentation, and frameworks, are used to address this problem; however, using these approaches to consistently apply the valuable knowledge previously gained during development of multiple applications can be an inefficient and error-prone process. Software factories address this problem by encoding proven practices for developing a specific style of application within a package of integrated guidance that is easy for project teams to adopt. Developing applications using a suitable software factory can provide many benefits, such as improved productivity, quality and evolution capability. Components Software factories are unique and therefore contain a unique set of assets designed to help build a specific type of application. In general, most software factories contain interrelated assets of the following types: Factory Schema: A document that categorizes and summarizes the assets used to build and maintain a system (such as XML documents, models, etc.) in an orderly way, and defines relationships between them. Reference implementation: Provides an example of a realistic, finished product that the software factory helps developers build. Architecture guidance and patterns: Help explain application design choices and the motivation for those choices. How-to topics: Provide procedures and instructions for completing tasks. Recipes: Automate procedures in How-to topics, either entirely or in specific steps. They can help developers complete routine tasks with minimal input. Templates: Pre-made application elements with placeholders for arguments. They can be used for creating initial project items. Designers: Provide information that developers can use to model applications at a higher level of abstraction. Reusable code: Components that implement common functionality or mechanisms. 
Integration of reusable code in a software factory reduces the requirements for manually written code and encourages reuse across applications. Product development Building a product using a software factory involves the following activities: Problem analysis: Determines whether the product is in the scope of a software factory. The fit determines whether all or some of the product is built with the software factory. Product specification: Defines the product requirements by outlining the differences from the product line requirements using a range of product specification mechanisms. Product design: Maps the differences in requirements to differences in product line architecture and development process to produce a customized process. Product implementation: A range of mechanisms can be used to develop the implementation depending on the extent of the differences. Product deployment: Involves creating or reusing default deployment constraints and configuring the required resources necessary to install the executables being deployed. Product testing: Involves creating or reusing test assets (such as test cases, data sets, and scripts) and applying instrumentation and measurement tools. Benefits Developing applications using a software factory can provide many benefits when compared to conventional software development approaches. These include the following: Consistency: Software factories can be used to build multiple instances of a software product line (a set of applications sharing similar features and architecture), making it easier to achieve consistency. This simplifies governance and also lowers training and maintenance costs. Quality: Using a software factory makes it easier for developers to learn and implement proven practices. Because of the integration of reusable code, developers are able to spend more time working on features that are unique to each application, reducing the likelihood of design flaws and code defects. Applications developed using a software factory can also be verified before deployment, ensuring that factory-specific best practices were followed during development. Productivity: Many application development activities can be streamlined and automated, such as reusing software assets and generating code from abstractions of the application elements and mechanisms. These benefits can provide value to several different teams in the following ways: Value for business Business tasks can be simplified which can significantly increase user productivity. This is achieved through using common and consistent user interfaces that reduce the need for end-user training. Easy deployment of new and updated functionality and flexible user interfaces also allows end users to perform tasks in a way that follows the business workflow. Data quality improvements reduce the need for data exchange between application parts through the ALT+TAB and copy and paste techniques. Value for architects Software factories can be used by architects to design applications and systems with improved quality and consistency. This is achieved through the ability to create a partial implementation of a solution that includes only the most critical mechanisms and shared elements. Known as the baseline architecture, this type of implementation can address design and development challenges, expose architectural decisions and mitigate risks early in the development cycle. 
Software factories also enable the ability to create a consistent and predictable way of developing, packaging, deploying and updating business components to enforce architectural standards independent of business logic. Value for developers Developers can use software factories to increase productivity and incur less ramp-up time. This is achieved through creating a high-quality starting point (baseline) for applications which includes code and patterns. This enables projects to begin with a higher level of maturity than traditionally developed applications. Reusable assets, guidance and examples help address common scenarios and challenges and automation of common tasks allows developers to easily apply guidance in consistent ways. Software factories provide a layer of abstraction that hides application complexity and separates concerns, allowing developers to focus on different areas such as business logic, the user interface (UI) or application services without in-depth knowledge of the infrastructure or baseline services. Abstraction of common developer tasks and increased reusability of infrastructure code can help boost productivity and maintainability. Value for operations Applications built with software factories result in a consolidation of operational efforts. This provides easier deployment of common business elements and modules, resulting in consistent configuration management across a suite of applications. Applications can be centrally managed with pluggable architecture which allows operations teams to control basic services. Other approaches There are several approaches that represent contrasting views on software factory concepts, ranging from tool oriented to process oriented initiatives. The following approaches cover Japanese, European, and North American initiatives. Industrialized software organization (Japan) Under this approach, software produced in the software factory is primarily used for control systems, nuclear reactors, turbines, etc. The main objectives of this approach are quality matched with productivity, ensuring that the increased costs do not weaken competitiveness. There is also the additional objective of creating an environment in which design, programming, testing, installation and maintenance can be performed in a unified manner. The key in improving quality and productivity is the reuse of software. Dominant traits of the organizational design include a determined effort to make operating work routine, simple and repetitive and to standardize work processes. A representative of this approach would be Toshiba's software factory concept, denoting the company's software division and procedures as they were in 1981 and 1987 respectively. Generic software factory (Europe) This approach was funded under the Eureka program and called the Eureka Software Factory. Participants in this project are large European companies, computer manufacturers, software houses, research institutes and universities. The aim of this approach is to provide the technology, standards, organizational support and other necessary infrastructures in order for software factories to be constructed and tailored from components marketed by independent suppliers. The objective of this approach is to produce an architecture and framework for integrated development environments. The generic software factory develops components and production environments that are part of software factories together with standards and guidance for software components. 
Experience-based component factory (North America) The experienced-based component factory is developed at the Software Engineering Laboratory at the NASA Goddard Space Flight Center. The goals of this approach are to "understand the software process in a production environment, determine the impact of available technologies and infuse identified/refined methods back into the development process". The approach has been to experiment with new technologies in a production environment, extract and apply experiences and data from experiments and to measure the impact with respect to cost, reliability and quality. This approach puts a heavy emphasis on continuous improvement through understanding the relationship between certain process characteristics and product qualities. The software factory is used to collect data about strengths and weaknesses to set baselines for improvements and to collect experiences to be reused in new projects. Mature software organization (North America) Defined by the Capability Maturity Model, this approach intended to create a framework to achieve a predictable, reliable, and self-improving software development process that produces software of high quality. The strategy consists of step-wise improvements in software organization, defining which processes are key in development. The software process and the software product quality are predictable because they are kept within measurable limits. History The first company to adopt this term was Hitachi in 1969 with its Hitachi Software Works. Later, other companies such as System Development Corporation in 1975, NEC, Toshiba and Fujitsu in 1976 and 1977 followed the same organizational approach. Cusumano suggests that there are six phases for software factories: Basic organization and management structure (mid-1960s to early 1970s) Technology tailoring and standardization (early 1970s to early 1980s) Process mechanization and support (late 1970s) Process refinement and extension (early 1980s) Integrated and flexible automation (mid-1980s) Incremental product / variety improvement (late 1980s) See also Software Product Line Software Lifecycle Processes Software engineering Systems engineering Software development process Automatic programming Domain-Specific Modeling (DSM) Model Driven Engineering (MDE) References External links Harvard Business Review Wipro Technologies: The Factory Model Outsourcing Without Offshoring Is Aim of ‘Software Factory’ By P. J. Connolly Information technology management Software project management
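The "assemble variants from predefined components via templates and recipes" idea described earlier in this article can be sketched in a few lines. The component names and template format are purely illustrative and are not drawn from any particular software-factory toolkit:

# Illustrative recipe: build an application variant by configuring reusable
# components according to a template, instead of hand-coding the application layer.
from typing import Callable, Dict, List

COMPONENTS: Dict[str, Callable[[], str]] = {
    "auth":   lambda: "standard login component",
    "report": lambda: "PDF reporting component",
    "audit":  lambda: "audit-trail component",
}

def assemble(template: Dict[str, List[str]]) -> List[str]:
    # The template lists which predefined components a product variant uses.
    return [COMPONENTS[name]() for name in template["components"]]

order_entry_app = assemble({"components": ["auth", "report"]})
print(order_entry_app)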
Software factory
[ "Technology" ]
2,153
[ "Information technology", "Information technology management" ]
3,019,969
https://en.wikipedia.org/wiki/Magnetic%20resonance%20angiography
Magnetic resonance angiography (MRA) is a group of techniques based on magnetic resonance imaging (MRI) to image blood vessels. Magnetic resonance angiography is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off"). Acquisition A variety of techniques can be used to generate the pictures of blood vessels, both arteries and veins, based on flow effects or on contrast (inherent or pharmacologically generated). The most frequently applied MRA methods involve the use of intravenous contrast agents, particularly those containing gadolinium to shorten the T1 of blood to about 250 ms, shorter than the T1 of all other tissues (except fat). Short-TR sequences produce bright images of the blood. However, many other techniques for performing MRA exist, and can be classified into two general groups: 'flow-dependent' methods and 'flow-independent' methods. Flow-dependent angiography One group of methods for MRA is based on blood flow. Those methods are referred to as flow dependent MRA. They take advantage of the fact that the blood within vessels is flowing to distinguish the vessels from other static tissue. That way, images of the vasculature can be produced. Flow dependent MRA can be divided into different categories: there is phase-contrast MRA (PC-MRA), which utilizes phase differences to distinguish blood from static tissue, and time-of-flight MRA (TOF MRA), which exploits the fact that moving spins of the blood experience fewer excitation pulses than static tissue, e.g. when imaging a thin slice. Time-of-flight (TOF) or inflow angiography uses a short echo time and flow compensation to make flowing blood much brighter than stationary tissue. As flowing blood enters the area being imaged it has seen a limited number of excitation pulses, so it is not saturated; this gives it a much higher signal than the saturated stationary tissue. As this method is dependent on flowing blood, areas with slow flow (such as large aneurysms) or flow that is in plane of the image may not be well visualized. This is most commonly used in the head and neck and gives detailed high-resolution images. It is also the most common technique used for routine angiographic evaluation of the intracranial circulation in patients with ischemic stroke. Phase-contrast MRA Phase-contrast (PC-MRA) can be used to encode the velocity of moving blood in the magnetic resonance signal's phase. The most common method used to encode velocity is the application of a bipolar gradient between the excitation pulse and the readout. A bipolar gradient is formed by two symmetric lobes of equal area. It is created by turning on the magnetic field gradient for some time, and then switching the magnetic field gradient to the opposite direction for the same amount of time. By definition, the total area (0th moment) of a bipolar gradient, M0, is null: M0 = ∫ G(t) dt = 0 (1) The bipolar gradient can be applied along any axis or combination of axes depending on the direction along which flow is to be measured (e.g. x). The phase φ accrued during the application of the gradient is 0 for stationary spins: their phase is unaffected by the application of the bipolar gradient.
For spins moving with a constant velocity v along the direction of the applied bipolar gradient, the accrued phase is φ = γ v M1 (2) The accrued phase is proportional to both v and the 1st moment of the bipolar gradient, M1, thus providing a means to estimate v; γ is the gyromagnetic ratio (which sets the Larmor frequency) of the imaged spins. To measure v, the phase of the MRI signal is manipulated by bipolar gradients (varying magnetic fields) that are preset to a maximum expected flow velocity. A second image acquisition with the bipolar gradient reversed is then acquired, and the difference of the two images is calculated. Static tissues such as muscle or bone will subtract out; however, moving tissue such as blood will acquire a different phase since it moves constantly through the gradient, thus also giving the speed of the flow. Since phase-contrast can only acquire flow in one direction at a time, 3 separate image acquisitions in all three directions must be computed to give the complete image of flow. Despite the slowness of this method, the strength of the technique is that in addition to imaging flowing blood, quantitative measurements of blood flow can be obtained. Flow-independent angiography Whereas most MRA techniques rely on contrast agents or on blood flow to generate contrast (contrast-enhanced techniques), there are also non-contrast enhanced flow-independent methods. These methods, as the name suggests, do not rely on flow, but are instead based on the differences of T1, T2 and chemical shift of the different tissues of the voxel. One of the main advantages of these techniques is that regions of slow flow, often found in patients with vascular disease, may be imaged more easily. Moreover, non-contrast enhanced methods do not require the administration of an additional contrast agent, which has recently been linked to nephrogenic systemic fibrosis in patients with chronic kidney disease and kidney failure. Contrast-enhanced magnetic resonance angiography uses injection of MRI contrast agents and is currently the most common method of performing MRA. The contrast medium is injected into a vein, and images are acquired both pre-contrast and during the first pass of the agent through the arteries. By subtraction of these two acquisitions in post-processing, an image is obtained which in principle only shows blood vessels, and not the surrounding tissue. Provided that the timing is correct, this may result in images of very high quality. An alternative is to use a contrast agent that does not, as most agents, leave the vascular system within a few minutes, but remains in the circulation up to an hour (a "blood-pool agent"). Since longer time is available for image acquisition, higher resolution imaging is possible. A problem, however, is the fact that both arteries and veins are enhanced at the same time if higher resolution images are required. Subtractionless contrast-enhanced magnetic resonance angiography: recent developments in MRA technology have made it possible to create high quality contrast-enhanced MRA images without subtraction of a non-contrast enhanced mask image. This approach has been shown to improve diagnostic quality, because it prevents motion subtraction artifacts as well as an increase of image background noise, both direct results of the image subtraction. An important condition for this approach is to have excellent body fat suppression over large image areas, which is possible by using mDIXON acquisition methods.
Traditional MRA suppresses signals originating from body fat during the actual image acquisition, which is a method that is sensitive to small deviations in the magnetic and electromagnetic fields and as a result may show insufficient fat suppression in some areas. mDIXON methods can distinguish and accurately separate image signals created by fat or water. By using the 'water images' for MRA scans, virtually no body fat is seen so that no subtraction masks are needed for high quality MR venograms. Non-enhanced magnetic resonance angiography: Since the injection of contrast agents may be dangerous for patients with poor kidney function, others techniques have been developed, which do not require any injection. These methods are based on the differences of T1, T2 and chemical shift of the different tissues of the voxel. A notable non-enhanced method for flow-independent angiography is balanced steady-state free precession (bSSFP) imaging which naturally produces high signal from arteries and veins. 2D and 3D acquisitions For the acquisition of the images two different approaches exist. In general, 2D and 3D images can be acquired. If 3D data is acquired, cross sections at arbitrary view angles can be calculated. Three-dimensional data can also be generated by combining 2D data from different slices, but this approach results in lower quality images at view angles different from the original data acquisition. Furthermore, the 3D data can not only be used to create cross sectional images, but also projections can be calculated from the data. Three-dimensional data acquisition might also be helpful when dealing with complex vessel geometries where blood is flowing in all spatial directions (unfortunately, this case also requires three different flow encodings, one in each spatial direction). Both PC-MRA and TOF-MRA have advantages and disadvantages. PC-MRA has fewer difficulties with slow flow than TOF-MRA and also allows quantitative measurements of flow. PC-MRA shows low sensitivity when imaging pulsating and non-uniform flow. In general, slow blood flow is a major challenge in flow dependent MRA. It causes the differences between the blood signal and the static tissue signal to be small. This either applies to PC-MRA where the phase difference between blood and static tissue is reduced compared to faster flow and to TOF-MRA where the transverse blood magnetization and thus the blood signal are reduced. Contrast agents may be used to increase blood signal – this is especially important for very small vessels and vessels with very small flow velocities that normally show accordingly weak signal. Unfortunately, the use of gadolinium-based contrast media can be dangerous if patients suffer from poor renal function. To avoid these complications as well as eliminate the costs of contrast media, non-enhanced methods have been researched recently. Non-enhanced techniques in development Flow-independent NEMRA methods are not based on flow, but exploit differences in T1, T2 and chemical shift to distinguish blood from static tissue. Gated subtraction fast spin-echo: An imaging technique that subtracts two fast spin echo sequences acquired at systole and diastole. Arteriography is achieved by subtracting the systolic data, where the arteries appear dark, from the diastolic data set, where the arteries appear bright. Requires the use of electrocardiographic gating. Trade names for this technique include Fresh Blood Imaging (Toshiba), TRANCE (Philips), native SPACE (Siemens) and DeltaFlow (GE). 
4D dynamic MR angiography (4D-MRA): The first images, before enhancement, serve as a subtraction mask to extract the vascular tree in the succeeding images. It allows the operator to separate the arterial and venous phases of the blood flow and to visualise its dynamics. Much less time has been spent researching this method so far in comparison with other methods of MRA. BOLD venography or susceptibility weighted imaging (SWI): This method exploits the susceptibility differences between tissues and uses the phase image to detect these differences. The magnitude and phase data are combined (digitally, by an image-processing program) to produce an enhanced-contrast magnitude image which is exquisitely sensitive to venous blood, hemorrhage and iron storage. The imaging of venous blood with SWI is a blood-oxygen-level dependent (BOLD) technique, which is why it was (and is sometimes still) referred to as BOLD venography. Due to its sensitivity to venous blood, SWI is commonly used in traumatic brain injuries (TBI) and for high-resolution brain venography. Procedures similar to flow-effect-based MRA can also be used to image veins. For instance, magnetic resonance venography (MRV) is achieved by exciting a plane inferiorly while signal is gathered in the plane immediately superior to the excitation plane, thus imaging the venous blood which has recently moved out of the excited plane. Differences in tissue signals can also be used for MRA. This method is based on the different signal properties of blood compared to other tissues in the body, independent of MR flow effects. This is most successfully done with balanced pulse sequences such as TrueFISP or bTFE. BOLD can also be used in stroke imaging in order to assess the viability of tissue survival. Artifacts MRA techniques in general are sensitive to turbulent flow, which causes a variety of differently magnetized proton spins to lose phase coherence (the intra-voxel dephasing phenomenon), resulting in a loss of signal. This phenomenon may result in the overestimation of arterial stenosis. Other artifacts observed in MRA include: Phase-contrast MRA: Phase wrapping, caused by the underestimation of the maximum blood velocity in the image. Blood moving faster than the maximum set velocity for phase-contrast MRA gets aliased, and the signal wraps from π to −π instead, making flow information unreliable. This can be avoided by using velocity encoding (VENC) values above the maximum measured velocity. It can also be corrected with so-called phase unwrapping. Maxwell terms: caused by the switching of the gradient fields within the main field B0. This distorts the overall magnetic field and gives inaccurate phase information for the flow. Acceleration: accelerating blood flow is not properly encoded by the phase-contrast technique, which can lead to errors in quantifying blood flow. Time-of-flight MRA: Saturation artifact due to laminar flow: In many vessels, blood flow is slower near the vessel walls than near the center of the vessel. This causes blood near the vessel walls to become saturated and can reduce the apparent caliber of the vessel. Venetian blind artifact: Because the technique acquires images in slabs (as in multiple overlapping thin-slab acquisition, MOTSA), a non-uniform flip angle across the slab can appear as horizontal stripes in the composed images. Visualization Occasionally, MRA directly produces (thick) slices that contain the entire vessel of interest.
More commonly, however, the acquisition results in a stack of slices representing a 3D volume in the body. To display this 3D dataset on a 2D device such as a computer monitor, some rendering method has to be used. The most common method is maximum intensity projection (MIP), where the computer simulates rays through the volume and selects the highest value for display on the screen. The resulting images resemble conventional catheter angiography images. If several such projections are combined into a cine loop or QuickTime VR object, the depth impression is improved, and the observer can get a good perception of 3D structure. An alternative to MIP is direct volume rendering where the MR signal is translated to properties like brightness, opacity and color and then used in an optical model. Clinical use MRA has been successful in studying many arteries in the body, including cerebral and other vessels in the head and neck, the aorta and its major branches in the thorax and abdomen, the renal arteries, and the arteries in the lower limbs. For the coronary arteries, however, MRA has been less successful than CT angiography or invasive catheter angiography. Most often, the underlying disease is atherosclerosis, but medical conditions like aneurysms or abnormal vascular anatomy can also be diagnosed. An advantage of MRA compared to invasive catheter angiography is the non-invasive character of the examination (no catheters have to be introduced in the body). Another advantage, compared to CT angiography and catheter angiography, is that the patient is not exposed to any ionizing radiation. Also, contrast media used for MRI tend to be less toxic than those used for CT angiography and catheter angiography, with fewer people having any risk of allergy. Also far less is needed to be injected into the patient. The greatest drawbacks of the method are its comparatively high cost and its somewhat limited spatial resolution. The length of time the scans take can also be an issue, with CT being far quicker. It is also ruled out in patients for whom MRI exams may be unsafe (such as having a pacemaker or metal in the eyes or certain surgical clips). MRA procedures for visualizing cranial circulation are no different from the positioning for a normal MRI brain. Immobilization within the head coil will be required. MRA is usually a part of the total MRI brain examination and adds approximately 10 minutes to the normal MRI protocol. See also Computed tomography angiography Transcranial doppler sonography References External links Magnetic resonance imaging Vascular procedures
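The maximum intensity projection described above, and the phase-difference velocity encoding used in phase-contrast MRA, are both straightforward to express numerically. The following is a minimal illustrative sketch only (it is not taken from any MRA package; the synthetic volume, the VENC value and the function names are assumptions made for the example):

```python
import numpy as np

def velocity_from_phase(phase_plus, phase_minus, venc_cm_s):
    """Phase-contrast MRA: convert the phase difference of two acquisitions
    (bipolar gradient applied with opposite polarity) into a velocity map.
    A phase difference of +/- pi corresponds to +/- VENC."""
    delta_phi = np.angle(np.exp(1j * (phase_plus - phase_minus)))  # wrap to (-pi, pi]
    return venc_cm_s * delta_phi / np.pi

def maximum_intensity_projection(volume, axis=0):
    """Visualization: keep the brightest voxel along each simulated ray."""
    return volume.max(axis=axis)

# --- tiny synthetic demonstration --------------------------------------
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64)) * 0.1          # dim background "tissue"
volume[:, 30:34, 20:24] = 1.0                    # a bright "vessel" running along axis 0

mip = maximum_intensity_projection(volume, axis=0)
print(mip.shape)            # (64, 64) projection image, resembling a projection angiogram
print(mip[32, 22])          # ~1.0 where the vessel projects

phase_a = rng.uniform(-np.pi, np.pi, (64, 64))
phase_b = phase_a - 0.5     # constant phase shift ~ constant through-plane flow
v = velocity_from_phase(phase_a, phase_b, venc_cm_s=100.0)
print(float(v.mean()))      # ~ 100 * 0.5 / pi cm/s everywhere
```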
Magnetic resonance angiography
[ "Chemistry" ]
3,382
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
3,020,111
https://en.wikipedia.org/wiki/Plesiadapiformes
Plesiadapiformes ("Adapid-like" or "near Adapiformes") is an extinct basal pan-primates group, as sister to the rest of the pan-primates. The pan-primates together with the Dermoptera form the Primatomorpha. Purgatorius may not be a primate as an extinct sister to the rest of the Dermoptera or a separate, more basal stem pan-primate branch. Even with Purgatorius removed, the crown primates may even have emerged in this group. Plesiadapiformes first appear in the fossil record between 65 and 55 million years ago, although many were extinct by the beginning of the Eocene. They may be the earliest known mammals to have finger nails in place of claws. In 1990, K.C. Beard attempted to link the Plesiadapiformes with the order Dermoptera. They proposed that paromomyid Phenacolemur had digital proportions of the fossil indicated gliding habits similar to that of colugos. In the following simplified cladogram, the crown primates are classified as highly derived Plesiadapiformes, possibly as sister of the Plesiadapoidea. The crown primates are cladistically granted here into the Plesiadapiformes, and "Plesiadapiformes" become a junior synonym of the primates. With this tree, the Plesiadapiformes are not literally extinct (in the sense of having no surviving descendants). The crown primates are also called "Euprimates" in this context. Alternatively, in 2018, the Plesiadapiformes were proposed to be more related to Dermoptera, or roughly corresponding to Primatomorpha, with both Dermoptera and the primates emerging within this group. Also in a 2020 paper, the primates and Dermoptera were jointly considered sister to the plesiadapiform Purgatoriidae, resulting in the following phylogenetic tree. Traditionally, they were regarded as a separate extinct order of Primatomorpha, but it now appears that groups such as the extant primates and/or the Dermoptera have emerged in the group. Similarly, in 2021 the Purgatoriidae were classified as sister to Dermoptera, while the rest of the Plesiadapiformes appear to be sister to the remaining primates: One possible classification table of plesiadapiform families is listed below. Plesiadapiformes Family Micromomyidae Superfamily Paromomyoidea Family Paromomyidae Family Picromomyidae Family Palaechthonidae Family Microsyopidae Superfamily Plesiadapoidea Family Carpolestidae Family Chronolestidae Family Plesiadapidae Family Saxonellidae References External links Mikko's Phylogeny Archive Mammal orders Paleocene first appearances Eocene extinctions Paraphyletic groups
Plesiadapiformes
[ "Biology" ]
611
[ "Phylogenetics", "Paraphyletic groups" ]
3,020,122
https://en.wikipedia.org/wiki/Goal%20programming
Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum, depending on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis: Determine the required resources to achieve a desired set of objectives. Determine the degree of attainment of the goals with the available resources. Provide the best satisfying solution under a varying amount of resources and priorities of the goals. History Goal programming was first used by Charnes, Cooper and Ferguson in 1955, although the actual name first appeared in a 1961 text by Charnes and Cooper. Seminal works by Lee, Ignizio, Ignizio and Cavalier, and Romero followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, and Jones and Tamiz give an annotated bibliography of the period 1990-2000. A recent textbook by Jones and Tamiz gives a comprehensive overview of the state-of-the-art in goal programming. The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon. Variants The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a higher priority level being infinitely more important than any deviations in lower priority levels. This is known as lexicographic or pre-emptive goal programming. Ignizio gives an algorithm showing how a lexicographic goal programme can be solved as a series of linear programmes. Lexicographic goal programming is used when there exists a clear priority ordering amongst the goals to be achieved. If the decision maker is more interested in direct comparisons of the objectives, then weighted or non-pre-emptive goal programming should be used. In this case, all the unwanted deviations are multiplied by weights, reflecting their relative importance, and added together as a single sum to form the achievement function. Deviations measured in different units cannot be summed directly due to the phenomenon of incommensurability. Hence each unwanted deviation is multiplied by a normalisation constant to allow direct comparison. Popular choices for normalisation constants are the goal target value of the corresponding objective (hence turning all deviations into percentages) or the range of the corresponding objective (between the best and the worst possible values, hence mapping all deviations onto a zero-one range). For decision makers more interested in obtaining a balance between the competing objectives, Chebyshev goal programming is used. Introduced by Flavell in 1976, this variant seeks to minimise the maximum unwanted deviation, rather than the sum of deviations. This utilises the Chebyshev distance metric.
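As an illustration of the weighted (non-pre-emptive) variant just described, the following sketch solves a small hypothetical two-goal problem with a standard LP solver. The activity coefficients, goal targets and weights are invented for the example, and the deviations are normalised by the goal targets (the percentage normalisation discussed above); it is a sketch of the formulation, not a reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x1, x2 (activity levels), plus deviation variables
#   d1_minus / d1_plus : under- / over-achievement of the profit goal
#   d2_minus / d2_plus : under- / over-achievement of the labour goal
# Variable order: [x1, x2, d1m, d1p, d2m, d2p]; all variables are >= 0 by default.

profit_target, labour_target = 200.0, 120.0
w_profit, w_labour = 0.6, 0.4          # relative importance (invented weights)

# Achievement function: minimise only the *unwanted* deviations
# (profit under-achievement d1m, labour over-achievement d2p),
# normalised by the targets so they are commensurable.
c = np.array([0, 0, w_profit / profit_target, 0, 0, w_labour / labour_target])

# Goal constraints:  5*x1 + 4*x2 + d1m - d1p = 200   (profit goal)
#                    3*x1 + 2*x2 + d2m - d2p = 120   (labour goal)
A_eq = np.array([[5, 4, 1, -1, 0, 0],
                 [3, 2, 0, 0, 1, -1]])
b_eq = np.array([profit_target, labour_target])

# Hard (non-goal) constraint: 2*x1 + x2 <= 100 units of raw material
A_ub = np.array([[2, 1, 0, 0, 0, 0]])
b_ub = np.array([100.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
x1, x2, d1m, d1p, d2m, d2p = res.x
print(res.status, res.fun)   # status 0 (optimal); achievement 0.0 means both goals are met
print(x1, x2, d1m, d2p)      # activities, profit shortfall, labour overrun
```

A lexicographic variant would instead solve a sequence of such linear programmes, fixing the attained deviation of each higher-priority goal before minimising the next.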
Strengths and weaknesses A major strength of goal programming is its simplicity and ease of use. This accounts for the large number of goal programming applications in many and diverse fields. Linear goal programmes can be solved using linear programming software, either as a single linear programme or, in the case of the lexicographic variant, as a series of connected linear programmes. Goal programming can hence handle relatively large numbers of variables, constraints and objectives. A debated weakness is the ability of goal programming to produce solutions that are not Pareto efficient. This violates a fundamental concept of decision theory, that no rational decision maker will knowingly choose a solution that is not Pareto efficient. However, techniques are available to detect when this occurs and to project the solution onto the Pareto efficient frontier in an appropriate manner. The setting of appropriate weights in the goal programming model is another area that has caused debate, with some authors suggesting the use of the analytic hierarchy process or interactive methods for this purpose. Also, the weights of the objective functions can be calculated based on the decision maker's preferences using the ordinal priority approach. See also Decision-making software External links LiPS — Free easy-to-use GUI program intended for solving linear, integer and goal programming problems. LINSOLVE — Free Windows command-line linear programming and linear goal programming solver References Mathematical optimization Multiple-criteria decision analysis Goal
Goal programming
[ "Mathematics" ]
981
[ "Mathematical optimization", "Mathematical analysis" ]
3,020,267
https://en.wikipedia.org/wiki/List%20of%20video%20game%20designers
This is a list of notable video game designers, past and present, in alphabetical order. The people in this list already have Wikipedia entries, and as such did significant design for notable computer games, console games, or arcade games. It does not include people in managerial roles (which often includes titles like "Producer" or "Development Director") or people who developed a concept without doing actual design work on the game itself (sometimes applicable to "co-creator" or "creator" roles). Just because a game is listed next to a designer's name does not imply that person was the sole designer. As with films, the credits for video games can be complicated. A Allen Adham: World of Warcraft. Michel Ancel: Rayman, Beyond Good & Evil. Ed Annunziata: Ecco the Dolphin, Vectorman, Kolibri, Mr. Bones, Mort the Chicken Chris Avellone: Fallout 2, Planescape: Torment, Icewind Dale, Star Wars Knights of the Old Republic II: The Sith Lords, Fallout: New Vegas B Ralph Baer: "Father of Video Games," created Chase (1967), the first game played in a television set. Clive Barker: Undying, Jericho. Richard Bartle: co-author of MUD, the first multi-user dungeon. Arnab Basu: Tomb Raider series and Batman: Arkham Asylum Chris Beatrice: Caesar, Lords of the Realm. Seamus Blackley: Flight Unlimited, Ultima Underworld, System Shock, and Trespasser Marc Blank: Zork Cliff Bleszinski: Gears of War, Gears of War 2, & LawBreakers Jonathan Blow: Braid, The Witness. Ed Boon: Mortal Kombat Brenda Brathwaite: Wizardry 8 Bill Budge: Raster Blaster, Pinball Construction Set Danielle Bunten Berry: M.U.L.E., The Seven Cities of Gold Eric Barone: Stardew Valley C Tim Cain: Fallout, Arcanum Rich Carlson: Strange Adventures in Infinite Space, Weird Worlds: Return to Infinite Space Charles Cecil: Broken Sword, Beyond a Steel Sky Mark Cerny: Marble Madness, Knack Eric Chahi: Another World, Heart of Darkness Trevor Chan: Capitalism, Capitalism II, Seven Kingdoms, Seven Kingdoms II: The Fryhtan Wars, Bad Day L.A., Restaurant Empire Doug Church: Ultima Underworld, Ultima Underworld 2, System Shock Lori and Corey Cole: Quest for Glory series, Mixed-Up Fairy Tales, Castle of Dr. Brain Chris Crawford: Eastern Front, Balance of Power Scott Cawthon: Five Nights at Freddy's D Don Daglow: Dungeon, Intellivision Utopia, Earl Weaver Baseball, Neverwinter Nights Patrice Désilets: Assassin's Creed. Dino Dini: Kick Off, Kick Off 2, Player Manager, GOAL!, Dino Dini's Soccer Neil Druckmann: The Last of Us, The Last of Us Part II, Uncharted 2: Among Thieves, Uncharted 4: A Thief's End Jakub Dvorsky: Samorost, Machinarium. F Josef Fares: Brothers: A Tale of Two Sons, A Way Out, It Takes Two Brian Fargo: Bard's Tale, Wasteland Steve Fawkner: Warlords, Puzzle Quest Steve Feak: DotA Allstars, League of Legends Kelton Flinn: Air Warrior David Fox: Zak McKracken and the Alien Mindbenders Toby Fox: Undertale and Deltarune František Fuka: Tetris 2 Tokuro Fujiwara: Ghosts 'n Goblins Rob Fulop: Demon Attack, Cosmic Ark, Night Trap G Toby Gard: Tomb Raider Richard Garriott: Ultima Andy Gavin: Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, Crash Bandicoot: Warped, Crash Team Racing, Jak and Daxter: The Precursor Legacy, Jak II, Jak 3 Ron Gilbert: Maniac Mansion, Monkey Island Julian Gollop: Chaos, Laser Squad, X-COM: UFO Defense. 
Brian Green: Meridian 59 Stefano Gualeni: Tony Tough H Dean Hall: DayZ Jon Hare: Sensible Soccer, Cannon Fodder, Wizball Stieg Hedlund: Diablo, Diablo II, StarCraft Amy Hennig: Uncharted: Drake's Fortune, Uncharted 2: Among Thieves, Uncharted 3: Drake's Deception, Legacy of Kain: Soul Reaver William Higinbotham: Tennis for Two Yuji Horii: Dragon Quest, Chrono Trigger Todd Howard: Elder Scrolls, Fallout 3 Casey Hudson: Mass Effect 1-3, Star Wars: Knights of the Old Republic I IceFrog: Defense of the Ancients, Dota 2 Takashi Iizuka Tomonobu Itagaki: Dead or Alive, Ninja Gaiden, Devil's Third Shigesato Itoi: Mother Tōru Iwatani: Pac-Man, Pole Position J David Jaffe: God of War, Twisted Metal Jennell Jaquays: leader of game design for Coleco in the 1980s, designer and level designer for Quake 2, Quake III Arena Eugene Jarvis: Defender, Robotron: 2084 Jane Jensen: Gabriel Knight series, Gray Matter Soren Johnson: Civilization III, Civilization IV David Jones: Lemmings, Grand Theft Auto K Josef Kates, designer of Bertie the Brain Iikka Keränen, co-designer of Strange Adventures in Infinite Space, Weird Worlds: Return to Infinite Space Takeshi Kitano: Takeshi's Challenge Rieko Kodama: Phantasy Star series, Skies of Arcadia Hideo Kojima: Metal Gear series, Snatcher, Policenauts, and Death Stranding. Jarek Kolář: Vietcong. Kowloon Kurosawa: Hong Kong 97 L Marc Laidlaw: Half-Life, Half-Life 2 Ken Levine: BioShock, Thief: The Dark Project Ken Lobb: GoldenEye 007 Ed Logg: Asteroids, Centipede, Gauntlet Gilman Louie: Falcon, Super Tetris, Battle Trek Al Lowe: Leisure Suit Larry M Gregg Mayles: Banjo-Kazooie series, Viva Piñata American McGee: American McGee's Alice, Doom, Quake, American McGee's Grimm Edmund McMillen: Gish, Aether, Coil, Spewer, The Basement Collection, Super Meat Boy, The Binding of Isaac, The Binding of Isaac: Rebirth, The End Is Nigh, The Legend of Bum-bo Colin McComb: Planescape: Torment, Torment: Tides of Numenera Brad McQuaid: EverQuest Jordan Mechner: Karateka, Prince of Persia Sid Meier: Civilization, Railroad Tycoon Steve Meretzky: Planetfall, The Hitchhiker's Guide to the Galaxy, A Mind Forever Voyaging, Leather Goddesses of Phobos Shinji Mikami: Resident Evil Robyn Miller, Rand Miller: Myst Jeff Minter: Tempest 2000, Gridrunner Shigeru Miyamoto: Donkey Kong, Mario, Legend of Zelda Hidetaka Miyazaki: Dark Souls, Bloodborne, Sekiro, Elden Ring Tetsuya Mizuguchi: Lumines, Rez, Space Channel 5 Peter Molyneux: Populous, Syndicate, Black and White, Fable Brian Moriarty: Trinity, Loom David Mullich: The Prisoner, I Have No Mouth and I Must Scream N Yuji Naka: Sonic the Hedgehog, Phantasy Star Online, Nights into Dreams Doug Neubauer: Star Raiders, Solaris Garry Newman: Garry's Mod, Rust Gabe Newell: Half-Life (series) Toshihiro Nishikado: Space Invaders, Speed Race, Gun Fight Tetsuya Nomura: Final Fantasy, Kingdom Hearts O Yoshiki Okamoto: Street Fighter Scott Orr: Madden NFL P Alexey Pajitnov: Tetris Rob Pardo: World of Warcraft, Warcraft III David Perry: MDK, Earthworm Jim, Wild 9, Enter the Matrix Markus Persson: Minecraft Sandy Petersen: Lightspeed, Doom, Rise of Rome Simon Phipps: Rick Dangerous, ShadowMan, Harry Potter and the Philosopher's Stone Randy Pitchford: Borderlands, Brothers in Arms, Half-Life William Pugh: The Stanley Parable R Robert Topala :Geometry Dash Rick Raymer: Clue, Scooby Doo: Mystery of the Fun Park Phantom Frédérick Raynal: Alone in the Dark, Little Big Adventure, Toy Commander Paul Reiche III: World Tour Golf, Strange Adventures in Infinite Space, Mail 
Order Monsters, the Star Control series, the Archon series, the Starflight series, and the Skylanders series Tommy Refenes: Super Meat Boy Brian Reynolds: Civilization II, Sid Meier's Alpha Centauri, Rise of Nations and FrontierVille Chris Roberts: Wing Commander, Star Citizen Warren Robinett: Adventure, Rocky's Boots, & Robot Odyssey Scott Rogers: Maximo: Ghosts to Glory, Maximo vs. Army of Zin, God of War Ken Rolston: The Elder Scrolls (Morrowind and Oblivion) John Romero: Doom, Quake, Daikatana Jason Rubin: Crash Bandicoot, Crash Bandicoot 2: Cortex Strikes Back, Crash Bandicoot: Warped, Crash Team Racing, Jak and Daxter: The Precursor Legacy, Jak II, Jak 3 S Yoot Saito: Seaman, Odama, The Tower SP Hironobu Sakaguchi: Mistwalker, Final Fantasy series, Chrono Trigger Masahiro Sakurai: Kirby, Super Smash Bros. Kevin Saunders: Torment: Tides of Numenera, Neverwinter Nights 2: Mask of the Betrayer Chris Sawyer: Transport Tycoon, RollerCoaster Tycoon. Josh Sawyer: Neverwinter Nights 2, Icewind Dale, Baldur's Gate: Dark Alliance Tim Schafer: Grim Fandango, Psychonauts Jesse Schell: Toontown Online, Pixie Hollow Glen Schofield: Dead Space Laura Shigihara: Rakuen Ryan Shwayder: EverQuest II Jeremiah Slaczka: Scribblenauts, Drawn to Life Doug Smith: Lode Runner Harvey Smith: Deux Ex, more Warren Spector: System Shock, Thief, Deus Ex Tim & Chris Stamper: Wizards & Warriors, Battletoads, Donkey Kong Country, Donkey Kong Country 2: Diddy's Kong Quest & Donkey Kong Country 3: Dixie Kong's Double Trouble! Bruce Straley: Uncharted 2: Among Thieves, Uncharted 4: A Thief's End, The Last of Us Goichi Suda: Killer7, No More Heroes Yu Suzuki: Afterburner, Hang-On, Virtua Racing, Virtua Fighter, Ferrari F355 Challenge, Shenmue, Out Run Kim Swift: Portal David Sirlin: Super Street Fighter II Turbo HD Remix T Satoshi Tajiri: Pokémon Toshiro Tsuchida: Front Mission, Arc The Lad John Tobias: Mortal Kombat Chris Taylor, Total Annihilation Andy Tudor: Shift 2: Unleashed Yoko Taro: Drakengard, Nier, Nier: Automata U Fumito Ueda: Ico, Shadow of the Colossus, The Last Guardian V Jon Van Caneghem: Might and Magic, Heroes of Might and Magic Daniel Vávra: Mafia: The City of Lost Heaven, Kingdom Come: Deliverance Swen Vincke: Divinity, Baldur's Gate 3 W Robin Walker: Team Fortress, Team Fortress 2, Half-Life: Alyx Tony Warriner: Beneath A Steel Sky, Obsidian (1986 video game), Broken Sword1 Christopher Weaver: Gridiron! Jordan Weisman: BattleTech, MechWarrior Richard Vander Wende: Riven Evan Wells: Gex: Enter the Gecko, Crash Bandicoot: Warped, Crash Team Racing, Jak and Daxter: The Precursor Legacy, Jak II, Jak 3 Bill Williams: Necromancer, Alley Cat, Mind Walker Roberta Williams: King's Quest Tim Willits: Quake, Quake II, Quake III Arena, Quake III: Team Arena, Doom 3 Gary Winnick: Maniac Mansion, Thimbleweed Park Will Wright: SimCity, The Sims, Spore Y Kazunori Yamauchi: Gran Turismo Gunpei Yokoi: Metroid, Kid Icarus. See also List of video game industry people External links The Giant List of Classic Game Programmers which concentrates on 8-bit era game programmers who were usually also the game's designer Video game designers +Designers
List of video game designers
[ "Technology" ]
2,639
[ "Computing-related lists", "Video game lists" ]
3,021,101
https://en.wikipedia.org/wiki/Class%20number%20formula
In number theory, the class number formula relates many important invariants of an algebraic number field to a special value of its Dedekind zeta function. General statement of the class number formula We start with the following data: K is a number field. [K : Q] = n = r₁ + 2r₂, where r₁ denotes the number of real embeddings of K, and r₂ is the number of pairs of complex embeddings of K. ζ_K(s) is the Dedekind zeta function of K. h_K is the class number, the number of elements in the ideal class group of K. Reg_K is the regulator of K. w_K is the number of roots of unity contained in K. D_K is the discriminant of the extension K/Q. Then: Theorem (Class Number Formula). ζ_K(s) converges absolutely for Re(s) > 1 and extends to a meromorphic function defined for all complex s with only one simple pole at s = 1, with residue lim_{s→1} (s − 1) ζ_K(s) = 2^{r₁} · (2π)^{r₂} · h_K · Reg_K / (w_K · √|D_K|). This is the most general "class number formula". In particular cases, for example when K is a cyclotomic extension of Q, there are particular and more refined class number formulas. Proof The idea of the proof of the class number formula is most easily seen when K = Q(i). In this case, the ring of integers in K is the Gaussian integers. An elementary manipulation shows that the residue of the Dedekind zeta function at s = 1 is the average of the coefficients of the Dirichlet series representation of the Dedekind zeta function. The n-th coefficient of the Dirichlet series is essentially the number of representations of n as a sum of two squares of nonnegative integers. So one can compute the residue of the Dedekind zeta function at s = 1 by computing the average number of representations. As in the article on the Gauss circle problem, one can compute this by approximating the number of lattice points inside of a quarter circle centered at the origin, concluding that the residue is one quarter of pi. The proof when K is an arbitrary imaginary quadratic number field is very similar. In the general case, by Dirichlet's unit theorem, the group of units in the ring of integers of K is infinite. One can nevertheless reduce the computation of the residue to a lattice point counting problem using the classical theory of real and complex embeddings and approximate the number of lattice points in a region by the volume of the region, to complete the proof. Dirichlet class number formula Peter Gustav Lejeune Dirichlet published a proof of the class number formula for quadratic fields in 1839, but it was stated in the language of quadratic forms rather than classes of ideals. It appears that Gauss already knew this formula in 1801. This exposition follows Davenport. Let d be a fundamental discriminant, and write h(d) for the number of equivalence classes of quadratic forms with discriminant d. Let χ(n) = (d/n) be the Kronecker symbol. Then χ is a Dirichlet character. Write L(s, χ) for the Dirichlet L-series based on χ. For d > 0, let t > 0, u > 0 be the solution to the Pell equation t² − d·u² = 4 for which u is smallest, and write ε = (t + u·√d)/2. (Then ε is either a fundamental unit of the real quadratic field Q(√d) or the square of a fundamental unit.) For d < 0, write w for the number of automorphisms of quadratic forms of discriminant d; that is, w = 2 for d < −4, w = 4 for d = −4, and w = 6 for d = −3. Then Dirichlet showed that h(d) = w·√|d|·L(1, χ) / (2π) for d < 0, and h(d) = √d·L(1, χ) / ln ε for d > 0. This is a special case of Theorem 1 above: for a quadratic field K, the Dedekind zeta function is just ζ_K(s) = ζ(s)·L(s, χ), and the residue is L(1, χ). Dirichlet also showed that the L-series can be written in a finite form, which gives a finite form for the class number. Suppose χ is primitive with prime conductor q. Then L(1, χ) can be written as a finite character sum of elementary terms, whose exact shape depends on whether q ≡ 1 or q ≡ 3 (mod 4). Galois extensions of the rationals If K is a Galois extension of Q, the theory of Artin L-functions applies to ζ_K(s).
It has one factor of the Riemann zeta function, which has a pole of residue one, and the quotient is regular at s = 1. This means that the right-hand side of the class number formula can be equated to a left-hand side Π L(1,ρ)dim ρ with ρ running over the classes of irreducible non-trivial complex linear representations of Gal(K/Q) of dimension dim(ρ). That is according to the standard decomposition of the regular representation. Abelian extensions of the rationals This is the case of the above, with Gal(K/Q) an abelian group, in which all the ρ can be replaced by Dirichlet characters (via class field theory) for some modulus f called the conductor. Therefore all the L(1) values occur for Dirichlet L-functions, for which there is a classical formula, involving logarithms. By the Kronecker–Weber theorem, all the values required for an analytic class number formula occur already when the cyclotomic fields are considered. In that case there is a further formulation possible, as shown by Kummer. The regulator, a calculation of volume in 'logarithmic space' as divided by the logarithms of the units of the cyclotomic field, can be set against the quantities from the L(1) recognisable as logarithms of cyclotomic units. There result formulae stating that the class number is determined by the index of the cyclotomic units in the whole group of units. In Iwasawa theory, these ideas are further combined with Stickelberger's theorem. See also Brumer–Stark conjecture Smith–Minkowski–Siegel mass formula Notes References Algebraic number theory Quadratic forms
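A small numerical check of Dirichlet's formula for an imaginary quadratic discriminant makes the statement concrete. The sketch below is illustrative only: the choice d = −23 and the truncation length are arbitrary, h(d) is obtained directly by counting reduced binary quadratic forms, and the analytic side uses the fact that for d = −q with q prime and q ≡ 3 (mod 4) the Kronecker symbol reduces to the Legendre symbol mod q.

```python
import math

q = 23                      # prime, q ≡ 3 (mod 4), so d = -q is a fundamental discriminant
d = -q
w = 2                       # number of automorphisms for d < -4

def chi(n):
    """Kronecker symbol (d/n); for d = -q with prime q ≡ 3 (mod 4) this equals
    the Legendre symbol (n/q), computed here by Euler's criterion."""
    n %= q
    if n == 0:
        return 0
    return 1 if pow(n, (q - 1) // 2, q) == 1 else -1

# Class number by brute force: count reduced forms (a, b, c) with b^2 - 4ac = d,
# |b| <= a <= c, and b >= 0 whenever |b| = a or a = c.
h = 0
for a in range(1, math.isqrt(abs(d) // 3) + 1):
    for b in range(-a, a + 1):
        if (b * b - d) % (4 * a) == 0:
            c = (b * b - d) // (4 * a)
            if c >= a and not (b < 0 and (abs(b) == a or a == c)):
                h += 1
print("h(d) by counting reduced forms:", h)       # 3 for d = -23

# Dirichlet's analytic formula: h(d) = w * sqrt(|d|) * L(1, chi) / (2*pi)
L = sum(chi(n) / n for n in range(1, 200000))     # truncated L-series
print("analytic value:", w * math.sqrt(abs(d)) * L / (2 * math.pi))   # ≈ 3.0
```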
Class number formula
[ "Mathematics" ]
1,143
[ "Quadratic forms", "Algebraic number theory", "Number theory" ]
3,021,207
https://en.wikipedia.org/wiki/Current%20%28mathematics%29
In mathematics, more particularly in functional analysis, differential topology, and geometric measure theory, a k-current in the sense of Georges de Rham is a functional on the space of compactly supported differential k-forms, on a smooth manifold M. Currents formally behave like Schwartz distributions on a space of differential forms, but in a geometric setting, they can represent integration over a submanifold, generalizing the Dirac delta function, or more generally even directional derivatives of delta functions (multipoles) spread out along subsets of M. Definition Let Ω_c^m(M) denote the space of smooth m-forms with compact support on a smooth manifold M. A current is a linear functional on Ω_c^m(M) which is continuous in the sense of distributions. Thus a linear functional T : Ω_c^m(M) → R is an m-dimensional current if it is continuous in the following sense: if a sequence ω_k of smooth forms, all supported in the same compact set, is such that all derivatives of all their coefficients tend uniformly to 0 when k tends to infinity, then T(ω_k) tends to 0. The space of m-dimensional currents on M, denoted D_m(M), is a real vector space with operations defined by (T + S)(ω) := T(ω) + S(ω) and (λT)(ω) := λ·T(ω). Much of the theory of distributions carries over to currents with minimal adjustments. For example, one may define the support of a current T as the complement of the biggest open set U ⊆ M such that T(ω) = 0 whenever ω ∈ Ω_c^m(U). The linear subspace of D_m(M) consisting of currents with support (in the sense above) that is a compact subset of M is denoted E_m(M). Homological theory Integration over a compact rectifiable oriented submanifold M (with boundary) of dimension m defines an m-current, denoted by [[M]]: [[M]](ω) = ∫_M ω. If the boundary ∂M of M is rectifiable, then it too defines a current by integration, and by virtue of Stokes' theorem one has: [[∂M]](ω) = ∫_{∂M} ω = ∫_M dω = [[M]](dω). This relates the exterior derivative d with the boundary operator ∂ on the homology of M. In view of this formula we can define a boundary operator ∂ : D_{m+1}(M) → D_m(M) on arbitrary currents via duality with the exterior derivative by (∂T)(ω) := T(dω) for all compactly supported m-forms ω. Certain subclasses of currents which are closed under ∂ can be used instead of all currents to create a homology theory, which can satisfy the Eilenberg–Steenrod axioms in certain cases. A classical example is the subclass of integral currents on Lipschitz neighborhood retracts. Topology and norms The space of currents is naturally endowed with the weak-* topology, which will be further simply called weak convergence. A sequence T_k of currents converges to a current T if T_k(ω) → T(ω) for every ω. It is possible to define several norms on subspaces of the space of all currents. One such norm is the mass norm. If ω is an m-form, then define its comass by ‖ω‖ := sup { ⟨ω(x), ξ⟩ : x ∈ M, ξ a unit, simple m-vector }. So if ω is a simple m-form, then its mass norm is the usual L∞-norm of its coefficient. The mass of a current T is then defined as M(T) := sup { T(ω) : sup_x ‖ω(x)‖ ≤ 1 }. The mass of a current represents the weighted area of the generalized surface. A current such that M(T) < ∞ is representable by integration of a regular Borel measure by a version of the Riesz representation theorem. This is the starting point of homological integration. An intermediate norm is Whitney's flat norm, defined by F(T) := inf { M(T − ∂A) + M(A) : A an (m+1)-current with compact support }. Two currents are close in the mass norm if they coincide away from a small part. On the other hand, they are close in the flat norm if they coincide up to a small deformation. Examples Recall that Ω_c^0(Rⁿ) = C_c^∞(Rⁿ), so that the following defines a 0-current: T(f) = f(0). In particular every signed regular measure μ is a 0-current: T(f) = ∫ f(x) dμ(x). Let (x, y, z) be the coordinates in R³. Then the following defines a 2-current (one of many): T(a dx∧dy + b dy∧dz + c dx∧dz) = ∫₀¹ ∫₀¹ a(x, y, 0) dx dy. See also Georges de Rham Herbert Federer Differential geometry Varifold Notes References
Differential topology Functional analysis Generalized functions Generalized manifolds Schwartz distributions
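The defining duality (∂T)(ω) := T(dω) can be checked numerically in the simplest case: the 1-current given by integration over the interval [0, 1] in R. Its boundary acts on a function f as f(1) − f(0), which by the fundamental theorem of calculus equals the integral of df = f′(x) dx over the interval. A rough sketch (the grid size and the test function are arbitrary choices for the example):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]

def integrate(y):                     # simple trapezoidal rule on the fixed grid
    return float(np.sum((y[1:] + y[:-1]) * dx / 2.0))

def T(g):                             # T = [[0,1]] : T(g dx) = integral of g over [0, 1]
    return integrate(g(x))

def boundary_T(f):                    # (dT)(f) := T(df) = integral of f'(x) dx over [0, 1]
    return integrate(np.gradient(f(x), x))

print(T(lambda t: 2 * t))             # the current applied to the 1-form 2x dx gives 1.0

f = lambda t: np.sin(3 * t) + t**2    # an arbitrary smooth 0-form (a function)
print(boundary_T(f))                  # ≈ f(1) - f(0): the boundary current [[{1}]] - [[{0}]]
print(f(1.0) - f(0.0))
```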
Current (mathematics)
[ "Mathematics" ]
743
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Topology", "Mathematical relations", "Differential topology" ]
3,021,212
https://en.wikipedia.org/wiki/Ingress%20filtering
In computer networking, ingress filtering is a technique used to ensure that incoming packets are actually from the networks from which they claim to originate. This can be used as a countermeasure against various spoofing attacks where the attacker's packets contain fake IP addresses. Spoofing is often used in denial-of-service attacks, and mitigating these is a primary application of ingress filtering. Problem Networks receive packets from other networks. Normally a packet will contain the IP address of the computer that originally sent it. This allows devices in the receiving network to know where it came from, allowing a reply to be routed back (amongst other things), except when IP addresses are used through a proxy or a spoofed IP address, which does not pinpoint a specific user within that pool of users. A sender IP address can be faked (spoofed), characterizing a spoofing attack. This disguises the origin of packets sent, for example in a denial-of-service attack. The same holds true for proxies, although in a different manner than IP spoofing. Potential solutions One potential solution involves implementing the use of intermediate Internet gateways (i.e., those servers connecting disparate networks along the path followed by any given packet) filtering or denying any packet deemed to be illegitimate. The gateway processing the packet might simply ignore the packet completely, or where possible, it might send a packet back to the sender relaying a message that the illegitimate packet has been denied. Host intrusion prevention systems (HIPS) are one example of technical engineering applications that help to identify, prevent and/or deter unwanted, unsuspected or suspicious events and intrusions. Any router that implements ingress filtering checks the source IP field of IP packets it receives and drops packets if the packets don't have an IP address in the IP address block to which the interface is connected. This may not be possible if the end host is multi-homed and also sends transit network traffic. In ingress filtering, packets coming into the network are filtered if the network sending it should not send packets from the originating IP address(es). If the end host is a stub network or host, the router needs to filter all IP packets that have, as the source IP, private addresses (RFC 1918), bogon addresses or addresses that do not have the same network address as the interface. Networks Network ingress filtering is a packet filtering technique used by many Internet service providers to try to prevent IP address spoofing of Internet traffic, and thus indirectly combat various types of net abuse by making Internet traffic traceable to its source. Network ingress filtering makes it much easier to track denial-of-service attacks to their source(s) so they can be fixed. Network ingress filtering is a good neighbor policy that relies on cooperation between ISPs for their mutual benefit. The best current practices for network ingress filtering are documented by the Internet Engineering Task Force in BCP 38 and 84, which are defined by RFC 2827 and RFC 3704, respectively. BCP 84 recommends that upstream providers of IP connectivity filter packets entering their networks from downstream customers, and discard any packets which have a source address that is not allocated to that customer. 
There are many possible ways of implementing this policy; one common mechanism is to enable reverse-path forwarding on links to customers, which will indirectly apply this policy based on the provider's route filtering of their customers' route announcements. Deployment As of 2012, one report suggests that, contrary to general opinion about the lack of BCP 38 deployment, some 80% of the Internet (by various measures) were already applying anti-spoofing packet filtering in their networks. At least one computer security expert is in favor of passing a law requiring 100% of all ISPs to implement network ingress filtering as defined in IETF BCP 38. In the US, presumably the FCC would enforce this law. See also Egress filtering Ingress cancellation Prefix hijacking References External links - Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing (BCP 38) Ingress Filtering for Multihomed Networks (BCP 84) Information on BCP 38 » RFC Editor Information on BCP 84 » RFC Editor Routing MANRS Computer network security
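The core of the policy described above, dropping packets whose source address lies outside the prefixes allocated to the interface they arrive on, can be expressed in a few lines. The following is an illustrative sketch only: the prefixes and test addresses are invented (documentation ranges), and real routers enforce this in the forwarding plane, for example via access lists or unicast reverse-path forwarding.

```python
import ipaddress

# Prefixes legitimately originated by the customer attached to this interface,
# plus address space that should never appear as a source (RFC 1918, loopback, ...).
CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]
NEVER_SOURCE = [ipaddress.ip_network("10.0.0.0/8"),
                ipaddress.ip_network("172.16.0.0/12"),
                ipaddress.ip_network("192.168.0.0/16"),
                ipaddress.ip_network("127.0.0.0/8")]

def permit_ingress(src_ip: str) -> bool:
    """BCP 38-style check: accept a packet on a customer-facing interface only if
    its source address lies inside that customer's allocated prefixes."""
    src = ipaddress.ip_address(src_ip)
    if any(src in net for net in NEVER_SOURCE):
        return False
    return any(src in net for net in CUSTOMER_PREFIXES)

print(permit_ingress("203.0.113.57"))   # True  - inside the customer's allocation
print(permit_ingress("198.51.100.9"))   # False - spoofed source, dropped
print(permit_ingress("10.1.2.3"))       # False - RFC 1918 source, dropped
```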
Ingress filtering
[ "Engineering" ]
887
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
3,021,223
https://en.wikipedia.org/wiki/Dichotomic%20search
In computer science, a dichotomic search is a search algorithm that operates by selecting between two distinct alternatives (dichotomies, or polychotomies when there are more than two) at each step. It is a specific type of divide and conquer algorithm. A well-known example is binary search. Abstractly, a dichotomic search can be viewed as following edges of an implicit binary tree structure until it reaches a leaf (a goal or final state). This creates a theoretical tradeoff between the number of possible states and the running time: given k comparisons, the algorithm can only reach O(2^k) possible states and/or possible goals. Some dichotomic searches only have results at the leaves of the tree, such as the Huffman tree used in Huffman coding, or the implicit classification tree used in Twenty Questions. Other dichotomic searches also have results in at least some internal nodes of the tree, such as a dichotomic search table for Morse code. There is thus some looseness in the definition. Though there may indeed be only two paths from any node, there are then three possibilities at each step: choose one onward path or the other, or stop at this node. Dichotomic searches are often used in repair manuals, sometimes graphically illustrated with a flowchart similar to a fault tree. See also Binary search algorithm External links Python Program for Binary Search (Recursive and Iterative) Binary Search Search algorithms References
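As a concrete instance of the k-comparison tradeoff described above, the best-known dichotomic search, binary search over a sorted array, chooses between the two halves of the remaining range at every step and therefore needs only about log2(n) comparisons. A minimal sketch (the sample data is arbitrary):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.
    Each iteration selects one of two alternatives (left or right half),
    so at most about log2(len(sorted_items)) + 1 comparisons are needed."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1

data = [2, 3, 5, 7, 11, 13, 17, 19, 23]
print(binary_search(data, 13))   # 5
print(binary_search(data, 4))    # -1
```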
Dichotomic search
[ "Technology" ]
306
[ "Computing stubs", "Computer science", "Computer science stubs" ]
3,021,393
https://en.wikipedia.org/wiki/Length%20function
In the mathematical field of geometric group theory, a length function is a function that assigns a number to each element of a group. Definition A length function L : G → R⁺ on a group G is a function satisfying: L(e) = 0, where e is the identity element of G; L(g⁻¹) = L(g) for all g in G; L(gh) ≤ L(g) + L(h) for all g, h in G (subadditivity). Compare with the axioms for a metric and a filtered algebra. Word metric An important example of a length is the word metric: given a presentation of a group by generators and relations, the length of an element is the length of the shortest word expressing it. Coxeter groups (including the symmetric group) have combinatorially important length functions, using the simple reflections as generators (thus each simple reflection has length 1). See also: length of a Weyl group element. A longest element of a Coxeter group is both important and unique up to conjugation (up to different choice of simple reflections). Properties A group with a length function does not form a filtered group, meaning that the sublevel sets do not form subgroups in general. However, the group algebra of a group with a length function forms a filtered algebra: the subadditivity axiom L(gh) ≤ L(g) + L(h) corresponds to the filtration axiom. References Group theory Geometric group theory
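The word metric mentioned above can be computed directly for a small group by breadth-first search over its Cayley graph. The sketch below does this for the symmetric group on three letters with the two adjacent transpositions as generators (a standard Coxeter presentation), then spot-checks the subadditivity axiom L(gh) ≤ L(g) + L(h); the tuple representation of permutations is only a convenience for the example.

```python
from collections import deque
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations are stored as tuples of images of 0, 1, 2."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2)
s1 = (1, 0, 2)          # simple reflection swapping positions 0 and 1
s2 = (0, 2, 1)          # simple reflection swapping positions 1 and 2
generators = [s1, s2]   # each generator has word length 1

# Breadth-first search gives L(g) = length of the shortest word in the generators.
length = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in generators:
        h = compose(g, s)
        if h not in length:
            length[h] = length[g] + 1
            queue.append(h)

print(length)                      # the longest element (2, 1, 0) has length 3
assert all(length[compose(g, h)] <= length[g] + length[h]
           for g in permutations(range(3)) for h in permutations(range(3)))
print("subadditivity holds")
```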
Length function
[ "Physics", "Mathematics" ]
236
[ "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Symmetry" ]
3,021,435
https://en.wikipedia.org/wiki/Primary%20ideal
In mathematics, specifically commutative algebra, a proper ideal Q of a commutative ring A is said to be primary if whenever xy is an element of Q then x or yⁿ is also an element of Q, for some n > 0. For example, in the ring of integers Z, (pⁿ) is a primary ideal if p is a prime number. The notion of primary ideals is important in commutative ring theory because every ideal of a Noetherian ring has a primary decomposition, that is, can be written as an intersection of finitely many primary ideals. This result is known as the Lasker–Noether theorem. Consequently, an irreducible ideal of a Noetherian ring is primary. Various methods of generalizing primary ideals to noncommutative rings exist, but the topic is most often studied for commutative rings. Therefore, the rings in this article are assumed to be commutative rings with identity. Examples and properties The definition can be rephrased in a more symmetric manner: a proper ideal Q is primary if, whenever xy ∈ Q, we have x ∈ Q or y ∈ Q or both x and y lie in √Q. (Here √Q denotes the radical of Q.) A proper ideal Q of R is primary if and only if every zero divisor in R/Q is nilpotent. (Compare this to the case of prime ideals, where P is prime if and only if every zero divisor in R/P is actually zero.) Any prime ideal is primary, and moreover an ideal is prime if and only if it is primary and semiprime (also called radical ideal in the commutative case). Every primary ideal is primal. If Q is a primary ideal, then the radical of Q is necessarily a prime ideal P, and this ideal is called the associated prime ideal of Q. In this situation, Q is said to be P-primary. On the other hand, an ideal whose radical is prime is not necessarily primary: for example, if P = (x, z) in the ring R = k[x, y, z]/(xy − z²), then P is prime and the radical of P² is P, but we have xy = z² ∈ P², x ∉ P², and yⁿ ∉ P² for all n > 0, so P² is not primary. The primary decomposition of P² is (x) ∩ (x, y, z)²; here (x) is P-primary and (x, y, z)² is (x, y, z)-primary. An ideal whose radical is maximal, however, is primary. Every ideal I with radical P is contained in a smallest P-primary ideal: the set of all elements a such that ax ∈ I for some x ∉ P. The smallest P-primary ideal containing Pⁿ is called the n-th symbolic power of P. If P is a maximal prime ideal, then any ideal containing a power of P is P-primary. Not all P-primary ideals need be powers of P, but at least they contain a power of P; for example the ideal (x, y²) is P-primary for the ideal P = (x, y) in the ring k[x, y], but is not a power of P, however it contains P². If A is a Noetherian ring and P a prime ideal, then the kernel of the localization map A → A_P is the intersection of all P-primary ideals. A finite nonempty product of m-primary ideals, for a maximal ideal m, is m-primary, but an infinite product of m-primary ideals may not be m-primary; for example, in a Noetherian local ring with maximal ideal m, the intersection of all powers mⁿ is the zero ideal (Krull intersection theorem), while each mⁿ is m-primary; thus in the local ring k[[x, y]]/(xy), for instance, the infinite product of the maximal (and hence prime and hence primary) ideal yields the zero ideal, which in this case is not primary (because the zero divisor x is not nilpotent). In fact, in a Noetherian ring, a nonempty product of m-primary ideals is m-primary if and only if there exists some integer n such that mⁿ is contained in the product. Footnotes References On primal ideals, Ladislas Fuchs External links Primary ideal at Encyclopaedia of Mathematics Commutative algebra Ideals (ring theory)
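The counterexample above (P² not primary in k[x, y, z]/(xy − z²)) can be checked by machine: membership in the image of P² in the quotient ring is the same as membership in the ideal (x², xz, z², xy − z²) of the polynomial ring, which a Gröbner basis decides. A sketch assuming SymPy's groebner/reduce interface, where the remainder returned by reduce is zero exactly for ideal members:

```python
from sympy import symbols, groebner

x, y, z = symbols("x y z")

# Preimage in k[x, y, z] of P^2 = (x, z)^2 taken in the quotient ring k[x,y,z]/(xy - z^2)
I = [x**2, x*z, z**2, x*y - z**2]
G = groebner(I, x, y, z, order="lex")

def in_ideal(f):
    _, remainder = G.reduce(f)
    return remainder == 0

print(in_ideal(x*y))      # True:  xy = z^2 lies in P^2, a product of two elements...
print(in_ideal(x))        # False: ...of which neither factor x is in P^2,
print(in_ideal(y**5))     # False: ...nor is any power of y — so P^2 is not primary
```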
Primary ideal
[ "Mathematics" ]
791
[ "Fields of abstract algebra", "Commutative algebra" ]
3,021,657
https://en.wikipedia.org/wiki/Icemaker
An icemaker, ice generator, or ice machine may refer to either a consumer device for making ice, found inside a home freezer; a stand-alone appliance for making ice, or an industrial machine for making ice on a large scale. The term "ice machine" usually refers to the stand-alone appliance. The ice generator is the part of the ice machine that actually produces the ice. This would include the evaporator and any associated drives/controls/subframe that are directly involved with making and ejecting the ice into storage. When most people refer to an ice generator, they mean this ice-making subsystem alone, minus refrigeration. An ice machine, however, particularly if described as 'packaged', would typically be a complete machine including refrigeration, controls, and dispenser, requiring only connection to power and water supplies. The term icemaker is more ambiguous, with some manufacturers describing their packaged ice machine as an icemaker, while others describe their generators in this way. History In 1748, the first known artificial refrigeration was demonstrated by William Cullen at the University of Glasgow. Mr. Cullen never used his discovery for any practical purposes. This may be the reason why the history of the icemakers begins with Oliver Evans, an American inventor who designed the first refrigeration machine in 1805. In 1834, Jacob Perkins built the first practical refrigerating machine using ether in a vapor compression cycle. The American inventor, mechanical engineer and physicist received 21 American and 19 English patents (for innovations in steam engines, the printing industry and gun manufacturing among others) and is considered today the father of the refrigerator. In 1844, an American physician, John Gorrie, built a refrigerator based on Oliver Evans' design to make ice to cool the air for his yellow fever patients. His plans date back to 1842, making him one of the founding fathers of the refrigerator. Unfortunately for John Gorrie, his plans of manufacturing and selling his invention were met with fierce opposition by Frederic Tudor, the Boston “Ice King”. By then, Tudor was shipping ice from the United States to Cuba and was planning to expand his business to India. Fearing that Gorrie’s invention would ruin his business, he began a smear campaign against the inventor. In 1851, John Gorrie was awarded U.S. Patent 8080 for an ice machine. After struggling with Tudor's campaign and the death of his partner, John Gorrie also died, bankrupt and humiliated. His original icemaker plans and the prototype machine are held today at the National Museum of American History, Smithsonian Institution in Washington, D.C. In 1853, Alexander Twining was awarded U.S. Patent 10221 for an icemaker. Twining’s experiments led to the development of the first commercial refrigeration system, built in 1856. He also established the first artificial method of producing ice. Just like Perkins before him, James Harrison started experimenting with ether vapor compression. In 1854, James Harrison successfully built a refrigeration machine capable of producing 3,000 kilograms of ice per day and in 1855 he received an icemaker patent in Australia, similar to that of Alexander Twining. Harrison continued his experiments with refrigeration. Today he is credited for his major contributions to the development of modern cooling system designs and functionality strategies. These systems were later used to ship refrigerated meat across the globe. 
In 1867, Andrew Muhl built an ice-making machine in San Antonio, Texas, to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, which produced the world's first commercial icemakers. William Riley Brown served as its president and George Jasper Golden served as its superintendent. In 1876, German engineer Carl von Linde patented the process of liquefying gas that would later become an important part of basic refrigeration technology (U.S. Patent 1027862). In 1879 and 1891, two African American inventors patented improved refrigerator designs in the United States (Thomas Elkins – U.S. patent #221222 and respectively John Standard – U.S. patent #455891). In 1902, the Teague family of Montgomery purchased control of the firm. Their last advertisement in Ice and Refrigeration appeared in March 1904. In 1925, controlling interest in the Columbus Iron Works passed from the Teague family to W.C. Bradely of W.C. Bradley, Co. Jurgen Hans is credited with the invention of the first ice machine to produce edible ice in 1929. In 1932 he founded a company called Kulinda and started manufacturing edible ice, but by 1949 the business switched its central product from ice to central air conditioning. The ice machines from the late 1800s to the 1930s used toxic gases such as ammonia (NH3), methyl chloride (CH3Cl), and sulfur dioxide (SO2) as refrigerants. During the 1920s, several fatal accidents were registered. They were caused by the refrigerators leaking methyl chloride. In the quest of replacing dangerous refrigerants – especially methyl chloride – collaborative research ensued in American corporations. The result of this research was the discovery of Freon. In 1930, General Motors and DuPont formed Kinetic Chemicals to produce Freon, which would later become the standard for almost all consumer and industrial refrigerators. The original "Freon" produced at this time was chlorofluorocarbon, a moderately toxic gas causing ozone depletion. Principle of ice making All refrigeration equipment is made of four key components; the evaporator, the condenser, the compressor and the throttle valve. Ice machines all work the same way. The function of the compressor is to compress low-pressure refrigerant vapor to high-pressure vapor, and deliver it to the condenser. Here, the high-pressure vapor is condensed into high-pressure liquid, and drained out through the throttle valve to become low-pressure liquid. At this point, the liquid is conducted to the evaporator, where heat exchanging occurs, and ice is created. This is one complete refrigeration cycle. Consumer icemakers Freezer icemakers Automatic icemakers for the home were first offered by the Servel company around 1953. They are usually found inside the freezer compartment of a refrigerator. They produce crescent-shaped ice cubes from a metal mold. An electromechanical or electronic timer first opens a solenoid valve for a few seconds, allowing the mold to fill with water from the domestic cold water supply. The timer then closes the valve and lets the ice freeze for about 30 minutes. Then, the timer turns on a low-power electric heating element inside the mold for several seconds, to melt the ice cubes slightly so they will not stick to the mold. Finally, the timer runs a rotating arm that scoops the ice cubes out of the mold and into a bin, and the cycle repeats. 
If the bin fills with ice, the ice pushes up a wire arm, which shuts off the icemaker until the ice level in the bin goes down again. The user can also lift up the wire arm at any time to stop the production of ice. Later automatic icemakers in Samsung refrigerators use a flexible plastic mold. When the ice cubes are frozen, which is sensed by a Thermistor, the timer causes a motor to invert the mold and twist it so that the cubes detach and fall into a bin. Early icemakers dropped the ice into a bin in the freezer compartment; the user had to open the freezer door to obtain ice. In 1965, Frigidaire introduced icemakers that dispensed from the front of the freezer door. In these models, pressing a glass against a cradle on the outside of the door runs a motor, which turns an auger in the bin and delivers ice cubes to the glass. Most dispensers can optionally route the ice through a crushing mechanism to deliver crushed ice. Some dispensers can also dispense chilled water. Fresh food compartment icemakers There are alternatives to freezer compartment icemakers developed by manufacturers such as Whirlpool, LG, Samsung. This new type of icemaker located in the fresh food compartment is becoming a more popular feature among customers shopping for a new refrigerator with an icemaker. In order to function properly, the icemaker compartment should keep temperature inside around and needs to be properly sealed from the outside, since it is located in the fresh food compartment where temperatures are usually higher than . Unfortunately, there are some disadvantages for this type of icemakers and due to design flaws of icemaker compartment in the Samsung refrigerator, warm air getting inside through the seals and create water condensation. This condensation turning into ice chunks and jamming icemaker mechanism. Thousands of people in the United States were experiencing this issue and in 2017 was created a lawsuit against Samsung refusing to properly fix this issue. Portable icemakers Portable icemakers are units that can fit on a countertop. They are the fastest and smallest icemakers on the market. The ice produced by a portable icemaker is bullet-shaped and has a cloudy, opaque appearance. The first batch of ice can be made within 10 minutes of turning the appliance on and adding water. The water is pumped into a small tube with metal pegs immersed in the water. Because the unit is portable, water must be filled manually. The water is pumped from the bottom of the reservoir to the freeze tray. The pegs use a heating and cooling system inside to freeze the water around them and then heat up so the ice slips off the peg and into the storage bin. Ice begins to form in a matter of minutes, however, the size of ice cubes depends on the freezing cycle - a longer cycle results in thicker cubes. Portable icemakers will not keep the ice from melting, but the appliance will recycle the water to make more ice. Once the storage tray is full, the system will turn off automatically. Built-in and freestanding icemakers Built-in icemakers are engineered to fit under a kitchen or bar counter, but they can be used as freestanding units. Some produce crescent-shaped ice like the ice from a freezer icemaker; the ice is cloudy and opaque instead of clear, because the water is frozen faster than in others which are clear cube icemakers. In the process, tiny air bubbles get trapped, causing the cloudy appearance of the ice. 
However, most under-counter ice makers are clear ice makers, in which the ice has no air bubbles, and therefore the ice is clear and melts much more slowly. Industrial icemakers Commercial ice makers improve the quality of ice by using moving water. The water is run down a high-nickel-content stainless steel evaporator. The surface must be below freezing. Salt water requires lower temperatures to freeze and will last longer; such ice is generally used to package seafood products. Air and undissolved solids are washed away to such an extent that in horizontal evaporator machines the water has 98% of the solids removed, resulting in very hard, virtually pure, clear ice. In vertical evaporators the ice is softer, more so if there are actual individual cube cells. Commercial ice machines can make different forms of ice, such as flakes, crushed ice, cubes, octagons, and tubes. When the sheet of ice on the cold surface reaches the desired thickness, the sheet is slid down onto a grid of wires, where the sheet's weight causes it to be broken into the desired shapes, after which it falls into a storage bin. Flake ice machine Flake ice is made from a mixture of brine and water (max salt per ton of water); in some cases it can be made directly from brine water. Its thickness is between , and it has an irregular shape with diameters from . The evaporator of the flake ice machine is a vertically placed drum-shaped stainless steel container, equipped with a rotating blade that spins and scratches the ice off the inner wall of the drum. When operating, the principal shaft and blade spin anti-clockwise, driven by the reducer. Water is sprayed down from the sprinkler; ice is formed from the water brine on the inner wall. The water tray at the bottom catches the cold water while deflecting ice, and re-circulates it back into the sump. The sump will typically use a float valve to fill as needed during production. Flake machines have a tendency to form an ice ring inside the bottom of the drum. Electric heaters sit in wells at the very bottom to prevent this accumulation of ice where the crusher does not reach; some machines use scrapers to assist this. Like all ice machines, this system uses a low-temperature condensing unit. Most manufacturers also use an evaporator pressure regulating valve (EPRV). Applications A seawater flake ice machine can make ice directly from seawater. This ice can be used in the fast cooling of fish and other sea products. The fishing industry is the largest user of flake ice machines. Flake ice can lower the temperature of cleaning water and sea products; it therefore resists the growth of bacteria and keeps the seafood fresh. Because of its large contact area with, and the little damage it causes to, refrigerated materials, it is also used in storing and transporting vegetables, fruit, and meat. In baking, during the mixing of flour and milk, flake ice can be added to prevent the flour from self-raising. In most cases of biosynthesis and chemosynthesis, flake ice is used to control the reaction rate and maintain viability. Flake ice is sanitary and clean, with a rapid temperature-reduction effect. Flake ice is used as the direct source of water in the concrete cooling process, making up more than 80% by weight. Concrete will not crack if it has been mixed and poured at a constant, low temperature. Flake ice is also used for artificial snow, so it is widely applied in ski resorts and entertainment parks.
Cube icemaker Cube ice machines are classified as small ice machines, in contrast to tube ice machines, flake ice machines, or other ice machines. Common capacities range from to . Since the emergence of cube ice machines in the 1970s, they have evolved into a diverse family of ice machines. Cube ice machines are commonly seen as vertical modular devices. The upper part is an evaporator, and the lower part is an ice bin. The refrigerant circulates inside the pipes of a self-contained evaporator, where it conducts heat exchange with water and freezes the water into ice cubes. Once the cubes are frozen, an ejection mechanism releases them into a collection bin. Available in various types, such as under-counter, countertop, and commercial models (including those introduced by Frigidaire), cube ice makers cater to diverse settings including the food and beverage industries, healthcare, and residential use. When the water is thoroughly frozen into ice, it is automatically released and falls into the ice bin. Ice machines can have either a self-contained refrigeration system, where the compressor is built into the unit, or a remote refrigeration system, where the refrigeration components are located elsewhere, often on the roof of the business. Compressor Most compressors are either positive displacement compressors or radial compressors. Positive displacement compressors are currently the most efficient type of compressor and have the largest refrigerating effect per single unit (). They have a large range of possible power supplies, and can be , , or even higher. Positive displacement compressors work by trapping a fixed volume of refrigerant vapor and mechanically reducing that volume, compressing the refrigerant into high-pressure vapor. Positive displacement compressors are of four main types: screw compressor, rolling piston compressor, reciprocating compressor, and rotary compressor. Screw compressors can yield the largest refrigerating effect among positive displacement compressors, with their refrigerating capacity normally ranging from to . Screw compressors can also be divided into single-screw and dual-screw types. The dual-screw type is more often seen in use because it is very efficient. Rolling piston compressors and reciprocating compressors have similar refrigerating effects, and the maximum refrigerating effect can reach . Reciprocating compressors are the most common type of compressor because the technology is mature and reliable. Their refrigerating effect ranges from to . They compress gas by using a piston pushed by a crankshaft. Rotary compressors, mainly used in air conditioning equipment, have a very low refrigerating effect, normally not exceeding . They work by compressing gas using a piston pushed by a rotor, which spins in an isolated compartment. Condenser All condensers can be classified as one of three types: air cooling, water cooling, or evaporative cooling. An air-cooling condenser uses air as the heat-conducting medium, blowing air across the surface of the condenser to carry heat away from the high-pressure, high-temperature refrigerant vapor. A water-cooling condenser uses water as the heat-conducting medium to cool the refrigerant vapor into liquid. An evaporative condenser cools the refrigerant vapor through heat exchange between the pipes and water sprayed onto their surface, which then evaporates. This type of condenser is capable of working in warm environments, and it is also very efficient and reliable.
Tube ice generator A tube ice generator is an ice generator in which the water is frozen in tubes that extend vertically within a surrounding casing, the freezing chamber. At the bottom of the freezing chamber, there is a distributor plate having apertures surrounding the tubes and attached to a separate chamber into which a warm gas is passed to heat the tubes and cause the ice rods to slide down. Tube ice can be used in cooling processes, such as temperature control, fresh fish freezing, and beverage bottle freezing. It can be consumed alone or with food or beverages. Global applications and impact As of 2019 there were approximately 2 billion household refrigerators and over 40 million square meters of cold-storage facilities operating worldwide. In the US in 2018 almost 12 million refrigerators were sold. These data support the assertion that refrigeration has global applications, with a positive impact upon the economy, technology, social dynamics, health, and the environment. Economic applications Refrigeration is necessary for the implementation of many current or future energy sources (hydrogen liquefaction for alternative fuels in the automotive industry and thermonuclear fusion production for the alternative energy industries). The petro-chemical and pharmaceutical industries also need refrigeration, as it is used to control and moderate many types of reactions. Heat pumps, which operate on refrigeration processes, are frequently used as an energy-efficient way of producing heat. The production and transport of cryogenic fuels (liquid hydrogen and oxygen), as well as the long-term storage of these fluids, is necessary for the space industry. In the transportation industry, refrigeration is used in marine containers, reefer ships, refrigerated rail cars, road transport, and in liquefied gas tankers. Health applications In the food industry, refrigeration contributes to reducing post-harvest losses while supplying foods to consumers, enabling perishable foods to be preserved at all stages from production to consumption. In the medical sector, refrigeration is used for the transport of vaccines, organs, and stem cells, while cryotechnology is used in surgery and other medical procedures and research. Environmental applications Refrigeration is used in biodiversity maintenance based on the cryopreservation of genetic resources (cells, tissues, and organs of plants, animals and micro-organisms). Refrigeration enables the liquefaction of for underground storage, allowing the potential separation of from fossil fuels in power stations via cryogenic technology. Environmental aspects At an environmental level, the impact of refrigeration comes from the atmospheric emissions of refrigerant gases used in refrigerating installations and from the energy consumption of these installations, which contributes to emissions (and consequently to global warming) and thus reduces global energy resources. The atmospheric emissions of refrigerant gases result from leaks in insufficiently leak-tight refrigerating installations or from maintenance-related refrigerant-handling processes. Depending on the refrigerants used, these installations and their subsequent leaks can lead to ozone depletion (chlorinated refrigerants like CFCs and HCFCs) and/or climate change, by exerting an additional greenhouse effect (fluorinated refrigerants: CFCs, HCFCs and HFCs).
Alternative refrigerants In their continuing research into methods of replacing ozone-depleting and greenhouse refrigerants (CFCs, HCFCs and HFCs, respectively), the scientific community, together with the refrigerant industry, has come up with eco-friendly, all-natural alternative refrigerants. According to a report issued by the UN Environment Programme, “the increase in HFC emissions is projected to offset much of the climate benefit achieved by the earlier reduction in the emissions of Ozone depleting substances”. Among the non-HFC refrigerants found to successfully replace the traditional ones are ammonia, hydrocarbons and carbon dioxide. Ammonia The history of refrigeration began with the use of ammonia. After more than 120 years, this substance is still the preeminent refrigerant used by household, commercial and industrial refrigeration systems. The major problem with ammonia is its toxicity at relatively low concentrations. On the other hand, ammonia has zero impact on the ozone layer and very low global warming effects. While deaths caused by ammonia exposure are extremely rare, the scientific community has developed safe and technologically sound mechanisms for preventing ammonia leakage in modern refrigerating equipment. With this problem out of the way, ammonia is considered an eco-friendly refrigerant with numerous applications. Carbon dioxide (CO2) Like ammonia, carbon dioxide has been used as a refrigerant for many years, but it fell into almost complete disuse due to its low critical point and its high operating pressure. Carbon dioxide has zero impact on the ozone layer, and the global warming effects of the quantities required for use as a refrigerant are also negligible. Modern technology is solving such issues, and carbon dioxide is widely used today as an alternative to traditional refrigeration in several fields: industrial refrigeration (where it is usually combined with ammonia, either in cascade systems or as a volatile brine), the food industry (food and retail refrigeration), heating (heat pumps) and the transportation industry (transport refrigeration). Hydrocarbons Hydrocarbons are natural products with favorable thermodynamic properties, zero ozone-layer impact and negligible global warming effects. One issue with hydrocarbons is that they are highly flammable, which restricts their use to specific applications in the refrigeration industry. In 2011, the EPA approved three alternative refrigerants to replace hydrofluorocarbons (HFCs) in commercial and household freezers via the Significant New Alternatives Policy (SNAP) program. The three alternative refrigerants approved by the EPA were the hydrocarbons propane and isobutane, and a substance called HCR188C, a hydrocarbon blend (ethane, propane, isobutane and n-butane). HCR188C is used today in commercial refrigeration applications (supermarket refrigerators, stand-alone refrigerators and refrigerating display cases), in refrigerated transportation, in automotive air-conditioning systems and retrofit safety valves (for automotive applications), and in residential window air-conditioners. Future of refrigeration In October 2016, negotiators from 197 countries reached an agreement to reduce emissions of chemical refrigerants that contribute to global warming, re-emphasizing the historical importance of the Montreal Protocol and aiming to extend its impact to greenhouse gases, in addition to the efforts made to reduce ozone depletion caused by chlorofluorocarbons.
The agreement, concluded at a United Nations meeting in Kigali, Rwanda, set the terms for a rapid phasedown of hydrofluorocarbons (HFCs), whose manufacture would eventually be stopped altogether and whose uses would be reduced over time. The UN agenda and the Rwanda deal aim to find a new generation of refrigerants that are safe from both an ozone-layer and a greenhouse-effect point of view. The legally binding agreement could reduce projected emissions by as much as 88% and lower global warming by almost 0.5 degrees Celsius (nearly 1 degree Fahrenheit) by 2100. See also Pumpable ice technology Yakhchāl References External links How to Buy an Energy-Efficient Commercial Ice Machine. Federal Energy Management Program. Accessed April 2, 2009. Heating, ventilation, and air conditioning Cooling technology Food preservation Water ice Home appliances
Icemaker
[ "Physics", "Technology" ]
5,186
[ "Physical systems", "Machines", "Home appliances" ]
3,021,709
https://en.wikipedia.org/wiki/Scottish%20acre
A Scottish or Scots acre () was a land measurement used in Scotland. It was standardised in 1661. When the Weights and Measures Act 1824 was implemented, the English System was standardised into the Imperial System and Imperial acres were imposed throughout the United Kingdom, including in Scotland, and indeed throughout the British Empire from that point on. However, since then the metric system has come to be used in Scotland, as in the rest of the United Kingdom. Equivalent to: Metric system 5,080 square metres, 0.508 hectares Imperial system 54,760 square feet. This is approximately 1.257 acres (English). See also Acre Obsolete Scottish units of measurement In the East Highlands: Rood Scottish acre = 4 roods Oxgang (Damh-imir) = the area an ox could plough in a year (around 20 acres) Ploughgate (?) = 8 oxgangs Daugh (Dabhach) = 4 ploughgates In the West Highlands: Groatland - (Còta bàn) = basic unit Pennyland (Peighinn) = 2 groatlands Quarterland (Ceathramh) = 4 pennylands (8 groatlands) Ounceland (Tir-unga) = 4 quarterlands (32 groatlands) Markland (Marg-fhearann) = 8 Ouncelands (varied) Notes Obsolete Scottish units of measurement Units of area
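The equivalences above, together with the East Highland unit hierarchy, amount to a small conversion table. The Python snippet below simply encodes the factors stated in this article (the oxgang is only approximate, given as "around 20 acres"), so it is a sketch rather than an authoritative converter.

# Conversion factors as stated in the article (approximate where noted).
SQ_M_PER_SCOTS_ACRE = 5080            # 1 Scots acre is about 5,080 square metres

# East Highland land units, expressed in Scots acres.
SCOTS_ACRES_PER_UNIT = {
    "rood": 0.25,            # 4 roods = 1 Scots acre
    "scots_acre": 1,
    "oxgang": 20,            # "around 20 acres", approximate
    "ploughgate": 8 * 20,    # 8 oxgangs
    "daugh": 4 * 8 * 20,     # 4 ploughgates
}

def to_hectares(amount, unit):
    """Convert an amount of a historic Scots land unit to hectares."""
    scots_acres = amount * SCOTS_ACRES_PER_UNIT[unit]
    return scots_acres * SQ_M_PER_SCOTS_ACRE / 10_000

print(to_hectares(1, "oxgang"))       # about 10.2 hectares for one oxgang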
Scottish acre
[ "Mathematics" ]
290
[ "Quantity", "Units of area", "Units of measurement" ]
3,021,822
https://en.wikipedia.org/wiki/Fareham%20red%20brick
Fareham red brick is a famous red-tinged clay brick, from Fareham, Hampshire. Notable buildings constructed of these distinctive bricks include London's Royal Albert Hall and Knowle Hospital (previously known as Hampshire County Lunatic Asylum). References Bricks Fareham
Fareham red brick
[ "Physics" ]
53
[ "Materials stubs", "Materials", "Matter" ]
3,021,875
https://en.wikipedia.org/wiki/Entropy%20of%20mixing
In thermodynamics, the entropy of mixing is the increase in the total entropy when several initially separate systems of different composition, each in a thermodynamic state of internal equilibrium, are mixed without chemical reaction by the thermodynamic operation of removal of impermeable partition(s) between them, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new unpartitioned closed system. In general, the mixing may be constrained to occur under various prescribed conditions. In the customarily prescribed conditions, the materials are each initially at a common temperature and pressure, and the new system may change its volume, while being maintained at that same constant temperature, pressure, and chemical component masses. The volume available for each material to explore is increased, from that of its initially separate compartment, to the total common final volume. The final volume need not be the sum of the initially separate volumes, so that work can be done on or by the new closed system during the process of mixing, as well as heat being transferred to or from the surroundings, because of the maintenance of constant pressure and temperature. The internal energy of the new closed system is equal to the sum of the internal energies of the initially separate systems. The reference values for the internal energies should be specified in a way that is constrained to make this so, maintaining also that the internal energies are respectively proportional to the masses of the systems. For concision in this article, the term 'ideal material' is used to refer to either an ideal gas (mixture) or an ideal solution. In the special case of mixing ideal materials, the final common volume is in fact the sum of the initial separate compartment volumes. There is no heat transfer and no work is done. The entropy of mixing is entirely accounted for by the diffusive expansion of each material into a final volume not initially accessible to it. In the general case of mixing non-ideal materials, however, the total final common volume may be different from the sum of the separate initial volumes, and there may occur transfer of work or heat, to or from the surroundings; also there may be a departure of the entropy of mixing from that of the corresponding ideal case. That departure is the main reason for interest in entropy of mixing. These energy and entropy variables and their temperature dependences provide valuable information about the properties of the materials. On a molecular level, the entropy of mixing is of interest because it is a macroscopic variable that provides information about constitutive molecular properties. In ideal materials, intermolecular forces are the same between every pair of molecular kinds, so that a molecule feels no difference between other molecules of its own kind and of those of the other kind. In non-ideal materials, there may be differences of intermolecular forces or specific molecular effects between different species, even though they are chemically non-reacting. The entropy of mixing provides information about constitutive differences of intermolecular forces or specific molecular effects in the materials. The statistical concept of randomness is used for statistical mechanical explanation of the entropy of mixing. Mixing of ideal materials is regarded as random at a molecular level, and, correspondingly, mixing of non-ideal materials may be non-random. 
Mixing of ideal species at constant temperature and pressure In ideal species, intermolecular forces are the same between every pair of molecular kinds, so that a molecule "feels" no difference between itself and its molecular neighbors. This is the reference case for examining corresponding mixing of non-ideal species. For example, two ideal gases, at the same temperature and pressure, are initially separated by a dividing partition. Upon removal of the dividing partition, they expand into a final common volume (the sum of the two initial volumes), and the entropy of mixing is given by where is the gas constant, the total number of moles and the mole fraction of component , which initially occupies volume . After the removal of the partition, the moles of component may explore the combined volume , which causes an entropy increase equal to for each component gas. In this case, the increase in entropy is entirely due to the irreversible processes of expansion of the two gases, and involves no heat or work flow between the system and its surroundings. Gibbs free energy of mixing The Gibbs free energy change determines whether mixing at constant (absolute) temperature and pressure is a spontaneous process. This quantity combines two physical effects—the enthalpy of mixing, which is a measure of the energy change, and the entropy of mixing considered here. For an ideal gas mixture or an ideal solution, there is no enthalpy of mixing (), so that the Gibbs free energy of mixing is given by the entropy term only: For an ideal solution, the Gibbs free energy of mixing is always negative, meaning that mixing of ideal solutions is always spontaneous. The lowest value is when the mole fraction is 0.5 for a mixture of two components, or 1/n for a mixture of n components. Solutions and temperature dependence of miscibility Ideal and regular solutions The above equation for the entropy of mixing of ideal gases is valid also for certain liquid (or solid) solutions—those formed by completely random mixing so that the components move independently in the total volume. Such random mixing of solutions occurs if the interaction energies between unlike molecules are similar to the average interaction energies between like molecules. The value of the entropy corresponds exactly to random mixing for ideal solutions and for regular solutions, and approximately so for many real solutions. For binary mixtures the entropy of random mixing can be considered as a function of the mole fraction of one component. For all possible mixtures, , so that and are both negative and the entropy of mixing is positive and favors mixing of the pure components. The curvature of as a function of is given by the second derivative This curvature is negative for all possible mixtures , so that mixing two solutions to form a solution of intermediate composition also increases the entropy of the system. Random mixing therefore always favors miscibility and opposes phase separation. For ideal solutions, the enthalpy of mixing is zero so that the components are miscible in all proportions. For regular solutions a positive enthalpy of mixing may cause incomplete miscibility (phase separation for some compositions) at temperatures below the upper critical solution temperature (UCST). This is the minimum temperature at which the term in the Gibbs energy of mixing is sufficient to produce miscibility in all proportions. 
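The displayed equations referred to in the two preceding subsections did not survive the formatting of this text. As a hedged reconstruction from the surrounding definitions (n the total number of moles, R the gas constant, T the temperature, x_i the mole fractions), the standard forms are:

\Delta S_{\mathrm{mix}} = -nR \sum_i x_i \ln x_i

\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T \Delta S_{\mathrm{mix}} = nRT \sum_i x_i \ln x_i \quad (\Delta H_{\mathrm{mix}} = 0 \text{ for an ideal mixture})

and, for a binary mixture, the curvature discussed above is

\frac{\partial^2 \Delta_{\mathrm{mix}} S}{\partial x_1^2} = -\frac{nR}{x_1 (1 - x_1)} < 0 .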
Systems with a lower critical solution temperature Nonrandom mixing with a lower entropy of mixing can occur when the attractive interactions between unlike molecules are significantly stronger (or weaker) than the mean interactions between like molecules. For some systems this can lead to a lower critical solution temperature (LCST) or lower limiting temperature for phase separation. For example, triethylamine and water are miscible in all proportions below 19 °C, but above this critical temperature, solutions of certain compositions separate into two phases at equilibrium with each other. This means that is negative for mixing of the two phases below 19 °C and positive above this temperature. Therefore, is negative for mixing of these two equilibrium phases. This is due to the formation of attractive hydrogen bonds between the two components that prevent random mixing. Triethylamine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing that occurs below 19 °C is due not to entropy but to the enthalpy of formation of the hydrogen bonds. Lower critical solution temperatures also occur in many polymer-solvent mixtures. For polar systems such as polyacrylic acid in 1,4-dioxane, this is often due to the formation of hydrogen bonds between polymer and solvent. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy. Statistical thermodynamical explanation of the entropy of mixing of ideal gases Since thermodynamic entropy can be related to statistical mechanics or to information theory, it is possible to calculate the entropy of mixing using these two approaches. Here we consider the simple case of mixing ideal gases. Proof from statistical mechanics Assume that the molecules of two different substances are approximately the same size, and regard space as subdivided into a square lattice whose cells are the size of the molecules. (In fact, any lattice would do, including close packing.) This is a crystal-like conceptual model to identify the molecular centers of mass. If the two phases are liquids, there is no spatial uncertainty in each one individually. (This is, of course, an approximation. Liquids have a "free volume". This is why they are (usually) less dense than solids.) Everywhere we look in component 1, there is a molecule present, and likewise for component 2. After the two different substances are intermingled (assuming they are miscible), the liquid is still dense with molecules, but now there is uncertainty about what kind of molecule is in which location. Of course, any idea of identifying molecules in given locations is a thought experiment, not something one could do, but the calculation of the uncertainty is well-defined. We can use Boltzmann's equation for the entropy change as applied to the mixing process where is the Boltzmann constant. We then calculate the number of ways of arranging molecules of component 1 and molecules of component 2 on a lattice, where is the total number of molecules, and therefore the number of lattice sites. 
Calculating the number of permutations of objects, correcting for the fact that of them are identical to one another, and likewise for , After applying Stirling's approximation for the factorial of a large integer m: , the result is where we have introduced the mole fractions, which are also the probabilities of finding any particular component in a given lattice site. Since the Boltzmann constant , where is the Avogadro constant, and the number of molecules , we recover the thermodynamic expression for the mixing of two ideal gases, This expression can be generalized to a mixture of components, , with The Flory–Huggins solution theory is an example of a more detailed model along these lines. Relationship to information theory The entropy of mixing is also proportional to the Shannon entropy or compositional uncertainty of information theory, which is defined without requiring Stirling's approximation. Claude Shannon introduced this expression for use in information theory, but similar formulas can be found as far back as the work of Ludwig Boltzmann and J. Willard Gibbs. The Shannon uncertainty is not the same as the Heisenberg uncertainty principle in quantum mechanics which is based on variance. The Shannon entropy is defined as: where pi is the probability that an information source will produce the ith symbol from an r-symbol alphabet and is independent of previous symbols. (thus i runs from 1 to r ). H is then a measure of the expected amount of information (log pi) missing before the symbol is known or measured, or, alternatively, the expected amount of information supplied when the symbol becomes known. The set of messages of length N symbols from the source will then have an entropy of NH. The thermodynamic entropy is only due to positional uncertainty, so we may take the "alphabet" to be any of the r different species in the gas, and, at equilibrium, the probability that a given particle is of type i is simply the mole fraction xi for that particle. Since we are dealing with ideal gases, the identity of nearby particles is irrelevant. Multiplying by the number of particles N yields the change in entropy of the entire system from the unmixed case in which all of the pi were either 1 or 0. We again obtain the entropy of mixing on multiplying by the Boltzmann constant . So thermodynamic entropy with r chemical species with a total of N particles has a parallel to an information source that has r distinct symbols with messages that are N symbols long. Application to gases In gases there is a lot more spatial uncertainty because most of their volume is merely empty space. We can regard the mixing process as allowing the contents of the two originally separate contents to expand into the combined volume of the two conjoined containers. The two lattices that allow us to conceptually localize molecular centers of mass also join. The total number of empty cells is the sum of the numbers of empty cells in the two components prior to mixing. Consequently, that part of the spatial uncertainty concerning whether any molecule is present in a lattice cell is the sum of the initial values, and does not increase upon "mixing". Almost everywhere we look, we find empty lattice cells. Nevertheless, we do find molecules in a few occupied cells. When there is real mixing, for each of those few occupied cells, there is a contingent uncertainty about which kind of molecule it is. 
When there is no real mixing because the two substances are identical, there is no uncertainty about which kind of molecule it is. Using conditional probabilities, it turns out that the analytical problem for the small subset of occupied cells is exactly the same as for mixed liquids, and the increase in the entropy, or spatial uncertainty, has exactly the same form as obtained previously. Obviously the subset of occupied cells is not the same at different times. But only when there is real mixing and an occupied cell is found do we ask which kind of molecule is there. See also: Gibbs paradox, in which it would seem that "mixing" two samples of the same gas would produce entropy. Application to solutions If the solute is a crystalline solid, the argument is much the same. A crystal has no spatial uncertainty at all, except for crystallographic defects, and a (perfect) crystal allows us to localize the molecules using the crystal symmetry group. The fact that volumes do not add when dissolving a solid in a liquid is not important for condensed phases. If the solute is not crystalline, we can still use a spatial lattice, as good an approximation for an amorphous solid as it is for a liquid. The Flory–Huggins solution theory provides the entropy of mixing for polymer solutions, in which the macromolecules are huge compared to the solvent molecules. In this case, the assumption is made that each monomer subunit in the polymer chain occupies a lattice site. Note that solids in contact with each other also slowly interdiffuse, and solid mixtures of two or more components may be made at will (alloys, semiconductors, etc.). Again, the same equations for the entropy of mixing apply, but only for homogeneous, uniform phases. Mixing under other constraints Mixing with and without change of available volume In the established customary usage, expressed in the lead section of this article, the entropy of mixing comes from two mechanisms, the intermingling and possible interactions of the distinct molecular species, and the change in the volume available for each molecular species, or the change in concentration of each molecular species. For ideal gases, the entropy of mixing at prescribed common temperature and pressure has nothing to do with mixing in the sense of intermingling and interactions of molecular species, but is only to do with expansion into the common volume. According to Fowler and Guggenheim (1939/1965), the conflating of the just-mentioned two mechanisms for the entropy of mixing is well established in customary terminology, but can be confusing unless it is borne in mind that the independent variables are the common initial and final temperature and total pressure; if the respective partial pressures or the total volume are chosen as independent variables instead of the total pressure, the description is different. Mixing with each gas kept at constant partial volume, with changing total volume In contrast to the established customary usage, "mixing" might be conducted reversibly at constant volume for each of two fixed masses of gases of equal volume, being mixed by gradually merging their initially separate volumes by use of two ideal semipermeable membranes, each permeable only to one of the respective gases, so that the respective volumes available to each gas remain constant during the merge. 
Either one of the common temperature or the common pressure is chosen to be independently controlled by the experimenter, the other being allowed to vary so as to maintain constant volume for each mass of gas. In this kind of "mixing", the final common volume is equal to each of the respective separate initial volumes, and each gas finally occupies the same volume as it did initially. This constant volume kind of "mixing", in the special case of perfect gases, is referred to in what is sometimes called Gibbs' theorem. It states that the entropy of such "mixing" of perfect gases is zero. Mixing at constant total volume and changing partial volumes, with mechanically controlled varying pressure, and constant temperature An experimental demonstration may be considered. The two distinct gases, in a cylinder of constant total volume, are at first separated by two contiguous pistons made respectively of two suitably specific ideal semipermeable membranes. Ideally slowly and fictively reversibly, at constant temperature, the gases are allowed to mix in the volume between the separating membranes, forcing them apart, thereby supplying work to an external system. The energy for the work comes from the heat reservoir that keeps the temperature constant. Then, by externally forcing ideally slowly the separating membranes together, back to contiguity, work is done on the mixed gases, fictively reversibly separating them again, so that heat is returned to the heat reservoir at constant temperature. Because the mixing and separation are ideally slow and fictively reversible, the work supplied by the gases as they mix is equal to the work done in separating them again. Passing from fictive reversibility to physical reality, some amount of additional work, that remains external to the gases and the heat reservoir, must be provided from an external source for this cycle, as required by the second law of thermodynamics, because this cycle has only one heat reservoir at constant temperature, and the external provision of work cannot be completely efficient. Gibbs' paradox: "mixing" of identical species versus mixing of closely similar but non-identical species For entropy of mixing to exist, the putatively mixing molecular species must be chemically or physically detectably distinct. Thus arises the so-called Gibbs paradox, as follows. If molecular species are identical, there is no entropy change on mixing them, because, defined in thermodynamic terms, there is no mass transfer, and thus no thermodynamically recognized process of mixing. Yet the slightest detectable difference in constitutive properties between the two species yields a thermodynamically recognized process of transfer with mixing, and a possibly considerable entropy change, namely the entropy of mixing. The "paradox" arises because any detectable constitutive distinction, no matter how slight, can lead to a considerably large change in amount of entropy as a result of mixing. Though a continuous change in the properties of the materials that are mixed might make the degree of constitutive difference tend continuously to zero, the entropy change would nonetheless vanish discontinuously when the difference reached zero. From a general physical viewpoint, this discontinuity is paradoxical. But from a specifically thermodynamic viewpoint, it is not paradoxical, because in that discipline the degree of constitutive difference is not questioned; it is either there or not there. Gibbs himself did not see it as paradoxical. 
Distinguishability of two materials is a constitutive, not a thermodynamic, difference, for the laws of thermodynamics are the same for every material, while their constitutive characteristics are diverse. Though one might imagine a continuous decrease of the constitutive difference between any two chemical substances, physically it cannot be continuously decreased till it actually vanishes. It is hard to think of a smaller difference than that between ortho- and para-hydrogen. Yet they differ by a finite amount. The hypothesis, that the distinction might tend continuously to zero, is unphysical. This is neither examined nor explained by thermodynamics. Differences of constitution are explained by quantum mechanics, which postulates discontinuity of physical processes. For a detectable distinction, some means should be physically available. One theoretical means would be through an ideal semi-permeable membrane. It should allow passage, backwards and forwards, of one species, while passage of the other is prevented entirely. The entirety of prevention should include perfect efficacy over a practically infinite time, in view of the nature of thermodynamic equilibrium. Even the slightest departure from ideality, as assessed over a finite time, would extend to utter non-ideality, as assessed over a practically infinite time. Such quantum phenomena as tunneling ensure that nature does not allow such membrane ideality as would support the theoretically demanded continuous decrease, to zero, of detectable distinction. The decrease to zero detectable distinction must be discontinuous. For ideal gases, the entropy of mixing does not depend on the degree of difference between the distinct molecular species, but only on the fact that they are distinct; for non-ideal gases, the entropy of mixing can depend on the degree of difference of the distinct molecular species. The suggested or putative "mixing" of identical molecular species is not in thermodynamic terms a mixing at all, because thermodynamics refers to states specified by state variables, and does not permit an imaginary labelling of particles. Only if the molecular species are different is there mixing in the thermodynamic sense. See also CALPHAD Enthalpy of mixing Gibbs energy Notes References External links Online lecture Statistical mechanics Thermodynamic entropy
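As a numerical sanity check on the lattice-counting argument in the statistical-mechanics section above, the short Python script below compares the exact Boltzmann count k_B ln[N!/(N_1! N_2!)], computed with exact log-factorials rather than Stirling's approximation, against the ideal-mixing expression -N k_B (x_1 ln x_1 + x_2 ln x_2). The particle numbers are arbitrary example values chosen only for illustration.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mixing_entropy_exact(n1, n2):
    """k_B * ln W for W = (n1 + n2)! / (n1! n2!), via exact log-factorials."""
    n = n1 + n2
    ln_w = math.lgamma(n + 1) - math.lgamma(n1 + 1) - math.lgamma(n2 + 1)
    return k_B * ln_w

def mixing_entropy_ideal(n1, n2):
    """-N k_B * sum(x ln x), the Stirling-limit (ideal mixing) expression."""
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -n * k_B * (x1 * math.log(x1) + x2 * math.log(x2))

n1, n2 = 3_000_000, 1_000_000   # arbitrary example particle numbers
exact = mixing_entropy_exact(n1, n2)
ideal = mixing_entropy_ideal(n1, n2)
print(exact, ideal, abs(exact - ideal) / ideal)  # relative difference of order 1e-6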
Entropy of mixing
[ "Physics" ]
4,493
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
9,350,089
https://en.wikipedia.org/wiki/ADAM17
A disintegrin and metalloprotease 17 (ADAM17), also called TACE (tumor necrosis factor-α-converting enzyme), is a 70-kDa enzyme that belongs to the ADAM protein family of disintegrins and metalloproteases, activated by substrate presentation. Structure ADAM17 is an 824-amino acid polypeptide. ADAM17 has multidomain structure that includes a pro-domain, a metallo-protease domain, a disintegrin domain, a cysteine-rich domain, an EGF-like domain, a transmembrane domain, and a cytoplasmic tail. The metalloprotease domain is responsible for the enzyme's catalytic activity, cleaving membrane-bound proteins, including cytokines like TNF-alpha, to release their soluble forms. The disintegrin and cysteine-rich domains are implicated in cell adhesion and interaction with integrins, while the transmembrane domain anchors the protein in the membrane. The cytoplasmic tail is involved in intracellular signaling and protein-protein interactions. ADAM17's activity is tightly regulated through multiple mechanisms, including the removal of its pro-domain and interactions with regulatory proteins such as TIMPs (tissue inhibitors of metalloproteinases). Function ADAM17 is understood to be involved in the processing of tumor necrosis factor alpha (TNF-α) at the surface of the cell, and from within the intracellular membranes of the trans-Golgi network. This process, which is also known as 'shedding', involves the cleavage and release of a soluble ectodomain from membrane-bound pro-proteins (such as pro-TNF-α), and is of known physiological importance. ADAM17 was the first 'sheddase' to be identified, and is also understood to play a role in the release of a diverse variety of membrane-anchored cytokines, cell adhesion molecules, receptors, ligands, and enzymes. Cloning of the TNF-α gene revealed it to encode a 26 kDa type II transmembrane pro-polypeptide that becomes inserted into the cell membrane during its maturation. At the cell surface, pro-TNF-α is biologically active, and is able to induce immune responses via juxtacrine intercellular signaling. However, pro-TNF-α can undergo a proteolytic cleavage at its Ala76-Val77 amide bond, which releases a soluble 17kDa extracellular domain (ectodomain) from the pro-TNF-α molecule. This soluble ectodomain is the cytokine commonly known as TNF-α, which is of pivotal importance in paracrine signaling. This proteolytic liberation of soluble TNF-α is catalyzed by ADAM17. ADAM17 may play a prominent role in the Notch signaling pathway, during the proteolytic release of the Notch intracellular domain (from the Notch1 receptor) that occurs following ligand binding. ADAM17 also regulates the MAP kinase signaling pathway by regulating shedding of the EGFR ligand amphiregulin in the mammary gland. ADAM17 also has a role in the shedding of L-selectin, a cellular adhesion molecule. Activation The localization of ADAM17 is speculated to be an important determinant of shedding activity. TNF-α processing has classically been understood to occur in the trans-Golgi network, and be closely connected to transport of soluble TNF-α to the cell surface. Shedding is also associated with clustering of ADAM17 with its substrate, membrane bound TNF, in lipid rafts. The overall process is called substrate presentation and regulated by cholesterol. Research also suggests that the majority of mature, endogenous ADAM17 may be localized to a perinuclear compartment, with only a small amount of TACE being present on the cell surface. 
The localization of mature ADAM17 to a perinuclear compartment, therefore, raises the possibility that ADAM17-mediated ectodomain shedding may also occur in the intracellular environment, in contrast with the conventional model. Functional ADAM17 has been documented to be ubiquitously expressed in the human colon, with increased activity in the colonic mucosa of patients with ulcerative colitis, a main form of inflammatory bowel disease. Other experiments have also suggested that expression of ADAM17 may be inhibited by ethanol. Interactions ADAM17 has been shown to interact with: DLG1 MAD2L1, and MAPK1. iRhom2. Clinical significance Adam17 may facilitate entry of the SARS‑CoV‑2 virus, possibly by enabling fusion of virus particles with the cytoplasmic membrane. Adam17 has similar ACE2 cleavage activity as TMPRSS2, but by forming soluble ACE2, Adam17 may actually have the protective effect of blocking circulating SARS‑CoV‑2 virus particles. Adam17 sheddase activity may contribute to COVID-19 inflammation by cleavage of TNF-α and Interleukin-6 receptor. Recently, ADAM17 was discovered as a crucial mediator of resistance to radiotherapy. Radiotherapy can induce a dose-dependent increase of furin-mediated cleavage of the ADAM17 proform to active ADAM17, which results in enhanced ADAM17 activity in vitro and in vivo. It was also shown that radiotherapy activates ADAM17 in non-small cell lung cancer, which results in shedding of multiple survival factors, growth factor pathway activation, and radiotherapy-induced treatment resistance. References Further reading External links Proteases Clusters of differentiation EC 3.4.24 Signal transduction Human proteins Genes mutated in mice
ADAM17
[ "Chemistry", "Biology" ]
1,206
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
9,350,418
https://en.wikipedia.org/wiki/Molar%20conductivity
The molar conductivity of an electrolyte solution is defined as its conductivity divided by its molar concentration. where: κ is the measured conductivity (formerly known as specific conductance), c is the molar concentration of the electrolyte. The SI unit of molar conductivity is siemens metres squared per mole (S m2 mol−1). However, values are often quoted in S cm2 mol−1. In these last units, the value of Λm may be understood as the conductance of a volume of solution between parallel plate electrodes one centimeter apart and of sufficient area so that the solution contains exactly one mole of electrolyte. Variation of molar conductivity with dilution There are two types of electrolytes: strong and weak. Strong electrolytes usually undergo complete ionization, and therefore they have higher conductivity than weak electrolytes, which undergo only partial ionization. For strong electrolytes, such as salts, strong acids and strong bases, the molar conductivity depends only weakly on concentration. On dilution there is a regular increase in the molar conductivity of strong electrolyte, due to the decrease in solute–solute interaction. Based on experimental data Friedrich Kohlrausch (around the year 1900) proposed the non-linear law for strong electrolytes: where Λ is the molar conductivity at infinite dilution (or limiting molar conductivity), which can be determined by extrapolation of Λm as a function of , K is the Kohlrausch coefficient, which depends mainly on the stoichiometry of the specific salt in solution, α is the dissociation degree even for strong concentrated electrolytes, fλ is the lambda factor for concentrated solutions. This law is valid for low electrolyte concentrations only; it fits into the Debye–Hückel–Onsager equation. For weak electrolytes (i.e. incompletely dissociated electrolytes), however, the molar conductivity strongly depends on concentration: The more dilute a solution, the greater its molar conductivity, due to increased ionic dissociation. For example, acetic acid has a higher molar conductivity in dilute aqueous acetic acid than in concentrated acetic acid. Kohlrausch's law of independent migration of ions Friedrich Kohlrausch in 1875–1879 established that to a high accuracy in dilute solutions, molar conductivity can be decomposed into contributions of the individual ions. This is known as Kohlrausch's law of independent ionic migration. For any electrolyte AxBy, the limiting molar conductivity is expressed as x times the limiting molar conductivity of Ay+ and y times the limiting molar conductivity of Bx−. where: λi is the limiting molar ionic conductivity of ion i, νi is the number of ions i in the formula unit of the electrolyte (e.g. 2 and 1 for Na+ and in Na2SO4). Kohlrausch's evidence for this law was that the limiting molar conductivities of two electrolytes with two different cations and a common anion differ by an amount which is independent of the nature of the anion. For example, = for X = Cl−, I− and  . This difference is ascribed to a difference in ionic conductivities between K+ and Na+. Similar regularities are found for two electrolytes with a common anion and two cations. Molar ionic conductivity The molar ionic conductivity of each ionic species is proportional to its electrical mobility (μ), or drift velocity per unit electric field, according to the equation where z is the ionic charge, and F is the Faraday constant. The limiting molar conductivity of a weak electrolyte cannot be determined reliably by extrapolation. 
Instead it can be expressed as a sum of ionic contributions, which can be evaluated from the limiting molar conductivities of strong electrolytes containing the same ions. For aqueous acetic acid as an example, Values for each ion may be determined using measured ion transport numbers. For the cation: and for the anion: Most monovalent ions in water have limiting molar ionic conductivities in the range of . For example: The order of the values for alkali metals is surprising, since it shows that the smallest cation Li+ moves more slowly in a given electric field than Na+, which in turn moves more slowly than K+. This occurs because of the effect of solvation of water molecules: the smaller Li+ binds most strongly to about four water molecules so that the moving cation species is effectively . The solvation is weaker for Na+ and still weaker for K+. The increase in halogen ion mobility from F− to Cl− to Br− is also due to decreasing solvation. Exceptionally high values are found for H+ () and OH− (), which are explained by the Grotthuss proton-hopping mechanism for the movement of these ions. The H+ also has a larger conductivity than other ions in alcohols, which have a hydroxyl group, but behaves more normally in other solvents, including liquid ammonia and nitrobenzene. For multivalent ions, it is usual to consider the conductivity divided by the equivalent ion concentration in terms of equivalents per litre, where 1 equivalent is the quantity of ions that have the same amount of electric charge as 1 mol of a monovalent ion:  mol Ca2+,  mol ,  mol Al3+,  mol , etc. This quotient can be called the equivalent conductivity, although IUPAC has recommended that use of this term be discontinued and the term molar conductivity be used for the values of conductivity divided by equivalent concentration. If this convention is used, then the values are in the same range as monovalent ions, e.g. for  Ca2+ and for  . From the ionic molar conductivities of cations and anions, effective ionic radii can be calculated using the concept of Stokes radius. The values obtained for an ionic radius in solution calculated this way can be quite different from the ionic radius for the same ion in crystals, due to the effect of hydration in solution. Applications Ostwald's law of dilution, which gives the dissociation constant of a weak electrolyte as a function of concentration, can be written in terms of molar conductivity. Thus, the pKa values of acids can be calculated by measuring the molar conductivity and extrapolating to zero concentration. Namely, pKa = p() at the zero-concentration limit, where K is the dissociation constant from Ostwald's law. References Electrochemical concepts Physical chemistry Molar quantities
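The displayed equations in this article were lost in formatting. As a hedged reconstruction consistent with the surrounding definitions, the standard relations are:

\Lambda_m = \frac{\kappa}{c} \qquad \text{(definition)}

\Lambda_m = \Lambda_m^{\circ} - K\sqrt{c} \qquad \text{(Kohlrausch's law for strong electrolytes)}

\Lambda_m^{\circ} = \nu_+ \lambda_+^{\circ} + \nu_- \lambda_-^{\circ} \qquad \text{(independent migration of ions)}

\lambda_i = |z_i| \, \mu_i \, F \qquad \text{(molar ionic conductivity and mobility)}

and, for a weak electrolyte with degree of dissociation \alpha = \Lambda_m / \Lambda_m^{\circ}, Ostwald's dilution law gives the dissociation constant K = \dfrac{c\,\alpha^2}{1 - \alpha}.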
Molar conductivity
[ "Physics", "Chemistry" ]
1,426
[ "Applied and interdisciplinary physics", "Physical quantities", "Intensive quantities", "Electrochemical concepts", "Electrochemistry", "nan", "Physical chemistry", "Molar quantities" ]
9,350,828
https://en.wikipedia.org/wiki/Philo%20line
In geometry, the Philo line is a line segment defined from an angle and a point inside the angle as the shortest line segment through the point that has its endpoints on the two sides of the angle. Also known as the Philon line, it is named after Philo of Byzantium, a Greek writer on mechanical devices, who lived probably during the 1st or 2nd century BC. Philo used the line to double the cube; because doubling the cube cannot be done by a straightedge and compass construction, neither can constructing the Philo line. Geometric characterization The defining point of a Philo line, and the base of a perpendicular from the apex of the angle to the line, are equidistant from the endpoints of the line. That is, suppose that segment is the Philo line for point and angle , and let be the base of a perpendicular line to . Then and . Conversely, if and are any two points equidistant from the ends of a line segment , and if is any point on the line through that is perpendicular to , then is the Philo line for angle and point . Algebraic Construction A suitable fixation of the line given the directions from to and from to and the location of in that infinite triangle is obtained by the following algebra: The point is put into the center of the coordinate system, the direction from to defines the horizontal -coordinate, and the direction from to defines the line with the equation in the rectilinear coordinate system. is the tangent of the angle in the triangle . Then has the Cartesian Coordinates and the task is to find on the horizontal axis and on the other side of the triangle. The equation of a bundle of lines with inclinations that run through the point is These lines intersect the horizontal axis at which has the solution These lines intersect the opposite side at which has the solution The squared Euclidean distance between the intersections of the horizontal line and the diagonal is The Philo Line is defined by the minimum of that distance at negative . An arithmetic expression for the location of the minimum is obtained by setting the derivative , so So calculating the root of the polynomial in the numerator, determines the slope of the particular line in the line bundle which has the shortest length. [The global minimum at inclination from the root of the other factor is not of interest; it does not define a triangle but means that the horizontal line, the diagonal and the line of the bundle all intersect at .] is the tangent of the angle . Inverting the equation above as and plugging this into the previous equation one finds that is a root of the cubic polynomial So solving that cubic equation finds the intersection of the Philo line on the horizontal axis. Plugging in the same expression into the expression for the squared distance gives Location of Since the line is orthogonal to , its slope is , so the points on that line are . The coordinates of the point are calculated by intersecting this line with the Philo line, . yields With the coordinates shown above, the squared distance from to is . The squared distance from to is . The difference of these two expressions is . Given the cubic equation for above, which is one of the two cubic polynomials in the numerator, this is zero. This is the algebraic proof that the minimization of leads to . Special case: right angle The equation of a bundle of lines with inclination that run through the point , , has an intersection with the -axis given above. 
If form a right angle, the limit of the previous section results in the following special case: These lines intersect the -axis at which has the solution The squared Euclidean distance between the intersections of the horizontal line and vertical lines is The Philo Line is defined by the minimum of that curve (at negative ). An arithmetic expression for the location of the minimum is where the derivative , so equivalent to Therefore Alternatively, inverting the previous equations as and plugging this into another equation above one finds Doubling the cube The Philo line can be used to double the cube, that is, to construct a geometric representation of the cube root of two, and this was Philo's purpose in defining this line. Specifically, let be a rectangle whose aspect ratio is , as in the figure. Let be the Philo line of point with respect to right angle . Define point to be the point of intersection of line and of the circle through points . Because triangle is inscribed in the circle with as diameter, it is a right triangle, and is the base of a perpendicular from the apex of the angle to the Philo line. Let be the point where line crosses a perpendicular line through . Then the equalities of segments , , and follow from the characteristic property of the Philo line. The similarity of the right triangles , , and follow by perpendicular bisection of right triangles. Combining these equalities and similarities gives the equality of proportions or more concisely . Since the first and last terms of these three equal proportions are in the ratio , the proportions themselves must all be , the proportion that is required to double the cube. Since doubling the cube is impossible with a straightedge and compass construction, it is similarly impossible to construct the Philo line with these tools. Minimizing the area Given the point and the angle , a variant of the problem may minimize the area of the triangle . With the expressions for and given above, the area is half the product of height and base length, . Finding the slope that minimizes the area means to set , . Again discarding the root which does not define a triangle, the slope is in that case and the minimum area . References Further reading External links Euclidean plane geometry
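Since the Philo line cannot be constructed with straightedge and compass, it is natural to compute it numerically. The Python sketch below handles the right-angle case discussed above: it minimizes the length of a chord through a given interior point with endpoints on the two sides of the angle by a golden-section search over the slope, and compares the result with the classical closed-form minimum length (p^{2/3} + q^{2/3})^{3/2} for a right angle. The sample point and the bracketing interval are illustrative choices.

import math

def chord_length_sq(m, px, py):
    """Squared length of the chord of slope m (< 0) through P = (px, py),
    cut off by the positive x- and y-axes (right angle at the origin)."""
    x_int = px - py / m          # intersection with the x-axis
    y_int = py - m * px          # intersection with the y-axis
    return x_int ** 2 + y_int ** 2

def philo_line_right_angle(px, py):
    """Golden-section search for the slope minimizing the chord length."""
    phi = (math.sqrt(5) - 1) / 2
    lo, hi = -50.0, -1e-6        # bracket for the (negative) slope
    for _ in range(200):
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if chord_length_sq(a, px, py) < chord_length_sq(b, px, py):
            hi = b
        else:
            lo = a
    m = (lo + hi) / 2
    return m, math.sqrt(chord_length_sq(m, px, py))

px, py = 1.0, 2.0                # example interior point
slope, length = philo_line_right_angle(px, py)
closed_form = (px ** (2 / 3) + py ** (2 / 3)) ** 1.5
print(slope, length, closed_form)   # the two lengths should agree closely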
Philo line
[ "Mathematics" ]
1,145
[ "Planes (geometry)", "Euclidean plane geometry" ]
9,351,265
https://en.wikipedia.org/wiki/Friedlander%E2%80%93Iwaniec%20theorem
In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form . The first few such primes are 2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … . The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form less than is roughly of the order . History The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work. Refinements The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial represents infinitely many primes when the variable is also required to be prime. Namely, if is the prime numbers less than in the form then where Special case When , the Friedlander–Iwaniec primes have the form , forming the set 2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … . It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem. References Further reading . Additive number theory Theorems in analytic number theory Theorems about prime numbers
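The inline formulas did not survive in this copy of the text: the theorem concerns primes of the form a^2 + b^4, and the special case discussed above is b = 1, i.e. primes of the form a^2 + 1. A short Python sketch (brute-force trial division, illustrative only) enumerates the primes of this form below a bound and reproduces the opening terms quoted above.

    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    def friedlander_iwaniec_primes(limit):
        """Primes up to `limit` expressible as a**2 + b**4 with a, b >= 1."""
        found = set()
        b = 1
        while b ** 4 < limit:
            a = 1
            while a ** 2 + b ** 4 <= limit:
                n = a ** 2 + b ** 4
                if is_prime(n):
                    found.add(n)
                a += 1
            b += 1
        return sorted(found)

    print(friedlander_iwaniec_primes(1000))
    # [2, 5, 17, 37, 41, 97, 101, 137, ...] -- the opening terms listed above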
Friedlander–Iwaniec theorem
[ "Mathematics" ]
368
[ "Theorems in mathematical analysis", "Theorems in number theory", "Theorems in analytic number theory", "Theorems about prime numbers" ]
9,351,532
https://en.wikipedia.org/wiki/Froude%E2%80%93Krylov%20force
In fluid dynamics, the Froude–Krylov force—sometimes also called the Froude–Kriloff force—is a hydrodynamical force named after William Froude and Alexei Krylov. The Froude–Krylov force is the force introduced by the unsteady pressure field generated by undisturbed waves. The Froude–Krylov force does, together with the diffraction force, make up the total non-viscous forces acting on a floating body in regular waves. The diffraction force is due to the floating body disturbing the waves. Formulas The Froude–Krylov force can be calculated from: where is the Froude–Krylov force, is the wetted surface of the floating body, is the pressure in the undisturbed waves and the body's normal vector pointing into the water. In the simplest case the formula may be expressed as the product of the wetted surface area (A) of the floating body, and the dynamic pressure acting from the waves on the body: The dynamic pressure, , close to the surface, is given by: where is the sea water density (approx. 1030 kg/m3) is the acceleration due to the earth's gravity (9.81 m/s2) is the wave height from crest to trough. See also Response Amplitude Operator References Shipbuilding Naval architecture Fluid dynamics
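As a rough illustration of the simplest-case formula above, the Python sketch below takes the dynamic pressure near the surface to be rho * g * H / 2, the usual linear-wave-theory value that the stripped formula presumably showed, and multiplies it by the wetted area. The wetted area and wave height in the example are arbitrary placeholder values; the full Froude–Krylov force would instead integrate the undisturbed pressure field over the actual wetted surface.

    RHO_SEAWATER = 1030.0   # kg/m^3, the approximate density quoted above
    G = 9.81                # m/s^2

    def dynamic_pressure(wave_height):
        """Dynamic pressure amplitude near the free surface (Pa), taken here as
        rho * g * H / 2 with H the crest-to-trough wave height."""
        return RHO_SEAWATER * G * wave_height / 2.0

    def froude_krylov_force(wetted_area, wave_height):
        """Simplest-case estimate F = A * p_dyn (N)."""
        return wetted_area * dynamic_pressure(wave_height)

    # Placeholder example: 50 m^2 of wetted surface in 2 m crest-to-trough waves.
    print(f"{froude_krylov_force(50.0, 2.0) / 1e3:.1f} kN")   # about 505 kN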
Froude–Krylov force
[ "Chemistry", "Engineering" ]
291
[ "Naval architecture", "Chemical engineering", "Shipbuilding", "Marine engineering", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
9,351,991
https://en.wikipedia.org/wiki/Calf-intestinal%20alkaline%20phosphatase
Calf-intestinal alkaline phosphatase (CIAP/CIP) is a type of alkaline phosphatase that catalyzes the removal of phosphate groups from the 5' end of DNA strands and phosphomonoesters from RNA. This enzyme is frequently used in DNA sub-cloning, as DNA fragments that lack the 5' phosphate groups cannot ligate. This prevents recircularization of the linearized DNA vector and improves the yield of the vector containing the appropriate insert. Applications Calf-intestinal alkaline phosphatase can serve as an effective tool for removing uranium from groundwater and soil that can pose major health risks. Furthermore, the toxicity of lipopolysaccharide (LPS) was mitigated by calf-intestinal alkaline phosphatase in mice and piglets, which indicates that it could be a promising new therapeutic agent for treating diseases associated with LPS. References Enzymes Genetics techniques
Calf-intestinal alkaline phosphatase
[ "Engineering", "Biology" ]
202
[ "Genetics techniques", "Genetic engineering" ]
9,352,497
https://en.wikipedia.org/wiki/Flag%20%28geometry%29
In (polyhedral) geometry, a flag is a sequence of faces of a polytope, each contained in the next, with exactly one face from each dimension. More formally, a flag of an -polytope is a set such that and there is precisely one in for each , Since, however, the minimal face and the maximal face must be in every flag, they are often omitted from the list of faces, as a shorthand. These latter two are called improper faces. For example, a flag of a polyhedron comprises one vertex, one edge incident to that vertex, and one polygonal face incident to both, plus the two improper faces. A polytope may be regarded as regular if, and only if, its symmetry group is transitive on its flags. This definition excludes chiral polytopes. Incidence geometry In the more abstract setting of incidence geometry, which is a set having a symmetric and reflexive relation called incidence defined on its elements, a flag is a set of elements that are mutually incident. This level of abstraction generalizes both the polyhedral concept given above as well as the related flag concept from linear algebra. A flag is maximal if it is not contained in a larger flag. An incidence geometry (Ω, ) has rank if Ω can be partitioned into sets Ω1, Ω2, ..., Ω, such that each maximal flag of the geometry intersects each of these sets in exactly one element. In this case, the elements of set Ω are called elements of type . Consequently, in a geometry of rank , each maximal flag has exactly elements. An incidence geometry of rank 2 is commonly called an incidence structure with elements of type 1 called points and elements of type 2 called blocks (or lines in some situations). More formally, An incidence structure is a triple D = (V, B, ) where V and B are any two disjoint sets and is a binary relation between V and B, that is, ⊆ V × B. The elements of V will be called points, those of B blocks and those of flags. Notes References Peter R. Cromwell, Polyhedra, Cambridge University Press 1997, Peter McMullen, Egon Schulte, Abstract Regular Polytopes, Cambridge University Press, 2002. Incidence geometry Polygons Polyhedra 4-polytopes
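A minimal Python sketch of the definition, using a square as the polytope: each proper face is modelled by the set of vertices it contains, and a flag is any chain that picks one face per dimension with each contained in the next (the two improper faces are left implicit, as in the shorthand above). The enumeration finds the expected 2 x 4 = 8 flags, one for each incident vertex-edge pair.

    from itertools import product

    # Proper faces of a square by dimension: vertices (0), edges (1), the polygon (2).
    vertices = [frozenset({i}) for i in range(4)]
    edges = [frozenset({i, (i + 1) % 4}) for i in range(4)]
    polygon = [frozenset(range(4))]

    def flags(faces_by_dim):
        """All choices of one face per dimension with each contained in the next."""
        return [combo for combo in product(*faces_by_dim)
                if all(a <= b for a, b in zip(combo, combo[1:]))]

    square_flags = flags([vertices, edges, polygon])
    print(len(square_flags))                     # 8: every vertex lies on two edges
    print([sorted(face) for face in square_flags[0]])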
Flag (geometry)
[ "Mathematics" ]
478
[ "Incidence geometry", "Combinatorics" ]
9,352,805
https://en.wikipedia.org/wiki/Commercial%20Processing%20Workload
The Commercial Processing Workload (CPW) is a simplified variant of the industry-wide TPC-C benchmarking standard originally developed by IBM to compare the performance of their various AS/400 (now IBM i) server offerings. The related, but less commonly used Computational Intensive Workload (CIW) measures performance in a situation where there is a high ratio of computation to input/output communication. The reverse situation is simulated by the CPW. References External links TPC slaps Oracle on benchmark claims- The Register Benchmarks (computing) IBM software
Commercial Processing Workload
[ "Technology" ]
116
[ "Computing comparisons", "Computer hardware stubs", "Computer performance", "Benchmarks (computing)", "Computing stubs" ]
9,353,579
https://en.wikipedia.org/wiki/Silver%20Y
The silver Y (Autographa gamma) is a migratory moth of the family Noctuidae which is named for the silvery Y-shaped mark on each of its forewings. Description The silver Y is a medium-sized moth with a wingspan of 30 to 45 mm. The wings are intricately patterned with various shades of brown and grey providing excellent camouflage. In the centre of each forewing there is a silver-coloured mark shaped like a lower case Y () or a lower case Greek letter gamma (). There are several different forms with varying colours depending on the climate in which the larvae grow. Technical description and variation P. gamma Forewing purplish grey, with darker suffusion in places; the lines pale silvery edged on both sides with dark fuscous, the outer line indented on vein 2 and submedian fold, as in circumflexa; the oblique orbicular and the reniform conversely oblique and constricted in middle, both edged with silvery: the median area below middle blackish, containing a silvery gamma; the subterminal dentate and indented, preceded by a darker shade; hindwing brownish grey with darker veins and a broad blackish terminal border: aberrations due to difference in ground colour are ab. pallida Tutt, in which the ground colour is whitish grey, with the markings appearing darker and more sharply defined; ab. rufescens Tutt where it is yellowish red, with the gamma mark pale golden, also the lines and edges of stigmata, and the whole underside reddish: and ab. nigricans Spul., in which the whole forewing up to the pale terminal space is violet black brown; in the ab. purpurissa ab. nov. (65a) the ground colour is deep olive brown; the inner and outer lines violet, the latter double; subterminal line lustrous violet, irregularly waved and below the middle forming a strong W-shaped mark; the gamma mark is pale golden, and the edges of the dark stigmata are, like the inner line, finely lustrous; a pale violet terminal stripe before termen; hindwing bronzy brownish, with broad dark terminal border. The example from which this description was made, now in the Tring Museum, was taken in Sussex, on the South Coast of England, and is referred to by Tutt in British Noctuae, Vol. IV, p. 32; lastly, the form gammina Stgr., from Syria and Pontus, is only half as large as typical gamma, with more definitely marked forewings. Larva pale green, with fine whitish or yellowish, partly double, lines; a straight yellowish lateral line above the white black-ringed spiracles. Distribution The species is widespread across Europe and over almost all the Palearctic including North Africa. It is resident in the south of its range and adults fly almost throughout the year. In spring variable numbers migrate north reaching as far as Iceland, Greenland, and Finland with huge invasions taking place in some years. A second wave of migrants arrives in the summer. In central Europe and the British Isles adults are present in significant numbers from May onwards with numbers dwindling in late autumn as they are killed off by frosts. Some individuals fly south again to winter around the Mediterranean and Black seas. It occurs in a wide variety of habitats, particularly open areas. It regularly visits gardens to take nectar from the flowers. Life history Silver Y moths can produce two or three generations in a year with a fourth generation when conditions are particularly good. The eggs are laid on the upper or lower surface of leaves. They are whitish in colour and hemispherical in shape with deep ribbing. 
They hatch after three to four days (longer in cool conditions). The larvae are about 30 mm long, have three pairs of prolegs and are usually green with whitish markings. They feed on a wide variety of low-growing plants and have been recorded on over 200 different species including crops such as the garden pea (Pisum sativum), sugar beet (Beta vulgaris) and cabbage (Brassica oleracea). They can reduce crop yields by damaging leaves and are often considered to be a pest. The pupa is green at first, gradually darkening to black. The adults mate one or two days after emerging from the pupa and start laying eggs one to five days later. They die three to nineteen days after emergence. Gallery References Sarah Brook Silver Y Autographa gamma Linnaeus Butterfly Conservation (retrieved 06/02/07) Robert C. Venette, Erica E. Davis, Holly Heisler, & Margaret Larson (2003) Mini Risk Assessment - Silver Y Moth, Autographa gamma (L.) (retrieved 29 MAR 2012) Paul Waring & Martin Townsend (2004) Field Guide to the Moths of Great Britain and Ireland, British Wildlife Publishing, Hampshire. External links Silver Y on UKmoths Funet Taxonomy Fauna Europaea Lepiforum.de Plusiini Agricultural pest insects Animal migration Moths described in 1758 Moths of Africa Moths of Asia Moths of Europe Taxa named by Carl Linnaeus Articles containing video clips
Silver Y
[ "Biology" ]
1,054
[ "Ethology", "Behavior", "Animal migration" ]
9,353,592
https://en.wikipedia.org/wiki/Sophomore%27s%20dream
In mathematics, the sophomore's dream is the pair of identities (especially the first) discovered in 1697 by Johann Bernoulli. The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively. The name "sophomore's dream" is in contrast to the name "freshman's dream" which is given to the incorrect identity The sophomore's dream has a similar too-good-to-be-true feel, but is true. Proof The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are: to write (using the notation for the natural logarithm and for the exponential function); to expand using the power series for ; and to integrate termwise, using integration by substitution. In details, can be expanded as Therefore, By uniform convergence of the power series, one may interchange summation and integration to yield To evaluate the above integrals, one may change the variable in the integral via the substitution With this substitution, the bounds of integration are transformed to giving the identity By Euler's integral identity for the Gamma function, one has so that Summing these (and changing indexing so it starts at instead of ) yields the formula. Historical proof The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms. The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing the definite integral. Integrating by substituting and yields: (also in the list of integrals of logarithmic functions). This reduces the power on the logarithm in the integrand by 1 (from to ) and thus one can compute the integral inductively, as where denotes the falling factorial; there is a finite sum because the induction stops at 0, since is an integer. In this case , and they are integers, so Integrating from 0 to 1, all the terms vanish except the last term at 1, which yields: This is equivalent to computing Euler's integral identity for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts. See also Series (mathematics) Notes References Formula OEIS, and Max R. P. Grossmann (2017): Sophomore's dream. 1,000,000 digits of the first constant Function Literature for x^x and Sophomore's Dream, Tetration Forum, 03/02/2010 The Coupled Exponential, Jay A. Fantini, Gilbert C. Kloepfer, 1998 Sophomore's Dream Function, Jean Jacquelin, 2010, 13 pp. Footnotes Integrals Mathematical constants
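The identities themselves did not survive extraction here; in their standard form they state that the integral of x^(-x) over [0, 1] equals the sum of n^(-n) for n >= 1, and the integral of x^x over [0, 1] equals the alternating sum of (-1)^(n+1) n^(-n). The Python sketch below checks both numerically with a composite Simpson rule, taking each integrand to be 1 at x = 0 (its limiting value); the results agree with the constants quoted above, roughly 1.2912859970 and 0.7834305107.

    import math

    def x_to_minus_x(x):
        return 1.0 if x == 0.0 else x ** (-x)

    def x_to_x(x):
        return 1.0 if x == 0.0 else x ** x

    def simpson(f, a, b, n=100_000):             # composite Simpson rule, n even
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
        s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
        return s * h / 3

    series_plus = sum(n ** (-n) for n in range(1, 20))
    series_alt = sum((-1) ** (n + 1) * n ** (-n) for n in range(1, 20))

    print(simpson(x_to_minus_x, 0.0, 1.0), series_plus)   # both ~1.2912859970...
    print(simpson(x_to_x, 0.0, 1.0), series_alt)          # both ~0.7834305107...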
Sophomore's dream
[ "Mathematics" ]
681
[ "Mathematical constants", "Mathematical objects", "Numbers", "nan" ]
9,353,706
https://en.wikipedia.org/wiki/Spontaneous%20combustion
Spontaneous combustion or spontaneous ignition is a type of combustion which occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self heating which rapidly accelerates to high temperatures) and finally, autoignition. It is distinct from (but has similar practical effects to) pyrophoricity, in which a compound needs no self-heat to ignite. The correct storage of spontaneously combustible materials is extremely important, as improper storage is the main cause of spontaneous combustion. Materials such as coal, cotton, hay, and oils should be stored at proper temperatures and moisture levels to prevent spontaneous combustion. Allegations of spontaneous human combustion are considered pseudoscience. Cause and ignition Spontaneous combustion can occur when a substance with a relatively low ignition temperature such as hay, straw, peat, etc., begins to release heat. This may occur in several ways, either by oxidation in the presence of moisture and air, or bacterial fermentation, which generates heat. These materials are thermal insulators that prevent the escape of heat causing the temperatures of the material to rise above its ignition point. Combustion will begin when a sufficient oxidizer, such as oxygen, and fuel are present to maintain the reaction into thermal runaway. Thermal runaway can occur when the amount of heat produced is greater than the rate at which the heat is lost. Materials that produce a lot of heat may combust in relatively small volumes, while materials that produce very little heat may only become dangerous when well insulated or stored in large volumes. Most oxidation reactions accelerate at higher temperatures, so a pile of material that would have been safe at a low ambient temperature may spontaneously combust during hotter weather. Affected materials Confirmed Hay and compost piles may self-ignite because of heat produced by bacterial fermentation, which then can cause pyrolysis and oxidation that leads to thermal runaway reactions that reach autoignition temperature. Rags soaked with drying oils or varnish can oxidize rapidly due to the large surface area, and even a small pile can produce enough heat to ignite under the right conditions. Coal can ignite spontaneously when exposed to oxygen, which causes it to react and heat up when there is insufficient ventilation for cooling. Pyrite oxidation is often the cause of coal's spontaneous ignition in old mine tailings. Pistachio nuts are highly flammable when stored in large quantities, and are prone to self-heating and spontaneous combustion. Large manure piles can spontaneously combust during conditions of extreme heat. Cotton and linen can ignite when they come into contact with polyunsaturated vegetable oils (linseed, massage oils); bacteria will slowly decompose the materials, producing heat. If these materials are stored in a way so the heat cannot escape, the heat buildup increases the rate of decomposition and thus the rate of heat buildup increases. Once ignition temperature is reached, combustion occurs with oxidizers present (oxygen). Nitrate film, when improperly stored, can deteriorate into an extremely flammable condition and combust. The 1937 Fox vault fire was caused by spontaneously combusting nitrate film. Hay Hay is one of the most widely studied materials in spontaneous combustion. 
It is very difficult to establish a unified theory of what occurs in hay self-heating because of the variation in the types of grass used in hay preparation, and the different locations where it is grown. It is anticipated that dangerous heating will occur in hay that contains more than 25% moisture. The largest number of fires occur within two to six weeks of storage, with the majority occurring in the fourth or fifth week. The process may begin with microbiological activity (bacteria or mold) which ferments the hay, creating ethanol. Ethanol has a flash point of . So with an ignition source such as static electricity, e.g. from a mouse running through the hay, combustion may occur. The temperature then increases, igniting the hay itself. Microbiological activity reduces the amount of oxygen available in the hay. At 100 °C, wet hay absorbed twice the amount of oxygen of dry hay. There has been conjecture that the complex carbohydrates present in hay break down to simpler sugars, which are more readily fermented to ethanol. Charcoal Charcoal, when freshly prepared, can self-heat and catch fire. This is separate from hot spots which may have developed from the preparation of charcoal. Charcoal that has been exposed to air for a period of eight days is not considered to be hazardous. There are many factors involved, among them the type of wood and the temperature at which the charcoal was prepared. Coal Extensive studies have been completed on the self-heating of coal. Improper storage of coal is a main cause of spontaneous combustion, as there can be a continuous oxygen supply and the oxidization of coal produces heat that doesn't dissipate. Over time, these conditions can cause self-heating. The tendency to self-heat decreases with the increasing rank of the coal. Lignite coals are more active than bituminous coals, which are more active than anthracite coals. Freshly mined coal consumes oxygen more rapidly than weathered coal, and freshly mined coal self-heats to a greater extent than weathered coal. The presence of water vapor may also be important, as the rate of heat generation accompanying the absorption of water in dry coal from saturated air can be an order of magnitude or more than the same amount of dry air. Cotton Cotton too can be at great risk of spontaneous combustion. In an experimental study on the spontaneous combustion of cotton, three different types of cotton were tested at different heating rates and pressures. Different cotton varieties can have different self-heating oxidation temperature and larger reactions. Understanding what type of cotton is being stored will help reduce the risk of spontaneous combustion. A striking example of a cargo igniting spontaneously occurred on the ship in the Indian Ocean on 24 August 1834. Oil seeds and oil-seed products Oil seeds and residue from oil extraction will self-heat if too moist. Typically, storage at 9–14% moisture is satisfactory, but limits are established for each individual variety of oil seed. In the presence of excess moisture that is just below the level required for germinating seed, the activity of mold fungi is a likely candidate for generating heat. This was established for flax and sunflower seeds, and soy beans. Many of the oil seeds generate oils that are self-heating. Palm kernels, rapeseed, and cotton seed have also been studied. Rags soaked in linseed oil can spontaneously ignite if improperly stored or discarded. 
Copra Copra, the dried, white flesh of the coconut from which coconut oil is extracted, has been classed with dangerous goods due to its spontaneously combustive nature. It is identified as a Division 4.2 substance. Human There have been unconfirmed anecdotal reports of people spontaneously combusting. This alleged phenomenon is not considered true spontaneous combustion, as supposed cases have been largely attributed to the wick effect, whereby an external source of fire ignites nearby flammable materials and human fat or other sources. Predictions and preventions There are many factors that can help predict spontaneous combustion and prevent it. The longer a material sits, the higher the risk of spontaneous combustion. Preventing spontaneous combustion can be as simple as not leaving materials stored for extended periods of time, controlling air flow, moisture, methane, and pressure balances. There are also many materials that prevent spontaneous combustion. For example, spontaneous coal combustion can be prevented by physical based materials such as chlorine salts, ammonium salts, alkalis, inert gases, colloids, polymers, aerosols, and LDHs, as well as chemical-based materials like antioxidants, ionic liquids, and composite materials. References Bibliography External links Article on the spontaneous combustion of coal, May 1993 Spontaneous combustion demonstration Combustion
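The heat-balance argument above, that runaway occurs when heat is produced faster than it is lost, can be illustrated with a toy Semenov-type model. The Python sketch below is not taken from the article and every parameter value in it is invented purely for illustration: heat is generated at an Arrhenius rate and lost in proportion to the temperature excess over ambient, and the same pile either settles near ambient or runs away depending only on how effectively it sheds heat.

    import math

    R = 8.314        # J/(mol K)
    E_A = 9.0e4      # activation energy, J/mol (illustrative)
    A_W = 6.0e17     # pre-exponential heat-generation factor, W (illustrative)
    M_CP = 5.0e6     # lumped heat capacity of the pile, J/K (illustrative)
    T_AMB = 298.0    # ambient temperature, K

    def simulate(h_loss, t_end=3.0e6, dt=10.0):
        """Euler-integrate dT/dt = (generation - h_loss*(T - T_amb)) / (m*cp)."""
        T, t = T_AMB, 0.0
        while t < t_end and T < 500.0:           # above 500 K: call it thermal runaway
            gen = A_W * math.exp(-E_A / (R * T))
            T += (gen - h_loss * (T - T_AMB)) / M_CP * dt
            t += dt
        return round(T, 1), round(t / 86400, 1)  # temperature (K), elapsed days

    print(simulate(h_loss=200.0))   # well-cooled pile: settles within ~1 K of ambient
    print(simulate(h_loss=20.0))    # poorly cooled pile: runs away after roughly a week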
Spontaneous combustion
[ "Chemistry" ]
1,613
[ "Combustion" ]
9,353,915
https://en.wikipedia.org/wiki/Colored%20dissolved%20organic%20matter
Colored dissolved organic matter (CDOM) is the optically measurable component of dissolved organic matter in water. Also known as chromophoric dissolved organic matter, yellow substance, and gelbstoff, CDOM occurs naturally in aquatic environments and is a complex mixture of many hundreds to thousands of individual, unique organic matter molecules, which are primarily leached from decaying detritus and organic matter. CDOM most strongly absorbs short wavelength light ranging from blue to ultraviolet, whereas pure water absorbs longer wavelength red light. Therefore, water with little or no CDOM, such as the open ocean, appears blue. Waters containing high amounts of CDOM can range from brown, as in many rivers, to yellow and yellow-brown in coastal waters. In general, CDOM concentrations are much higher in fresh waters and estuaries than in the open ocean, though concentrations are highly variable, as is the estimated contribution of CDOM to the total dissolved organic matter pool. Significance The concentration of CDOM can have a significant effect on biological activity in aquatic systems. CDOM diminishes light intensity as it penetrates water. Very high concentrations of CDOM can have a limiting effect on photosynthesis and inhibit the growth of phytoplankton, which form the basis of oceanic food chains and are a primary source of atmospheric oxygen. However, the influence of CDOM on algal photosynthesis can be complex in other aquatic systems like lakes where CDOM increases photosynthetic rates at low and moderate concentrations, but decreases photosynthetic rates at high concentrations. CDOM concentrations reflect hierarchical controls. Concentrations vary among lakes in close proximity due to differences in lake and watershed morphometry, and regionally because of difference in climate and dominant vegetation. CDOM also absorbs harmful UVA/B radiation, protecting organisms from DNA damage. Absorption of UV radiation causes CDOM to "bleach", reducing its optical density and absorptive capacity. This bleaching (photodegradation) of CDOM produces low-molecular-weight organic compounds which may be utilized by microbes, release nutrients that may be used by phytoplankton as a nutrient source for growth, and generates reactive oxygen species, which may damage tissues and alter the bioavailability of limiting trace metals. CDOM can be detected and measured from space using satellite remote sensing and often interferes with the use of satellite spectrometers to remotely estimate phytoplankton populations. As a pigment necessary for photosynthesis, chlorophyll is a key indicator of the phytoplankton abundance. However, CDOM and chlorophyll both absorb light in the same spectral range so it is often difficult to differentiate between the two. Although variations in CDOM are primarily the result of natural processes including changes in the amount and frequency of precipitation, human activities such as logging, agriculture, effluent discharge, and wetland drainage can affect CDOM levels in fresh water and estuarine systems. Measurement Traditional methods of measuring CDOM include UV-visible spectroscopy (absorbance) and fluorometry (fluorescence). Optical proxies have been developed to characterize sources and properties of CDOM, including specific ultraviolet absorbance at 254 nm (SUVA254) and spectral slopes for absorbance, and the fluorescence index (FI), biological index (BIX), and humification index (HIX) for fluorescence. 
Excitation emission matrices (EEMs) can be resolved into components in a technique called parallel factor analysis (PARAFAC), where each component is often labelled as "humic-like", "protein-like", etc. As mentioned above, remote sensing is the newest technique to detect CDOM from space. See also Blackwater river Color of water Dissolved organic carbon (DOC) Ocean turbidity Secchi disk References External links The Color of the Ocean from science@NASA Aquatic ecology Chemical oceanography Environmental chemistry Organic chemistry Water chemistry Water quality indicators Water supply
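As a hedged illustration of two of the absorbance-based proxies named above, the Python sketch below computes SUVA254 (decadic absorbance at 254 nm converted to per-metre units and divided by DOC in mg/L, as the index is commonly defined) and fits a spectral slope S to the exponential absorption model a(lambda) = a(ref) * exp(-S * (lambda - ref)). All measurement values in the example are made up.

    import math

    def suva254(absorbance_254, path_length_m, doc_mg_per_l):
        """SUVA254 (L mg-C^-1 m^-1): decadic absorbance at 254 nm converted to
        per-metre units, divided by the DOC concentration in mg/L."""
        return (absorbance_254 / path_length_m) / doc_mg_per_l

    def spectral_slope(wavelengths_nm, absorption_per_m, ref=350.0):
        """Least-squares fit of a(l) = a(ref)*exp(-S*(l - ref)); returns S in nm^-1."""
        xs = [l - ref for l in wavelengths_nm]
        ys = [math.log(a) for a in absorption_per_m]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return -slope

    # Made-up example: a CDOM absorption spectrum with a true slope of 0.018 nm^-1.
    wl = list(range(300, 451, 10))
    a_cdom = [12.0 * math.exp(-0.018 * (l - 350.0)) for l in wl]
    print(round(spectral_slope(wl, a_cdom), 4))   # recovers 0.018
    print(round(suva254(0.35, 0.01, 10.0), 2))    # 3.5 for a 1 cm cuvette, 10 mg/L DOC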
Colored dissolved organic matter
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
828
[ "Hydrology", "Environmental chemistry", "Water pollution", "Chemical oceanography", "Water quality indicators", "Ecosystems", "nan", "Environmental engineering", "Aquatic ecology", "Water supply" ]
9,354,293
https://en.wikipedia.org/wiki/Pasteur%20effect
The Pasteur effect describes how available oxygen inhibits ethanol fermentation, driving yeast to switch toward aerobic respiration for increased generation of the energy carrier adenosine triphosphate (ATP). More generally, in the medical literature, the Pasteur effect refers to how the cellular presence of oxygen causes in cells a decrease in the rate of glycolysis and also a suppression of lactate accumulation. The effect occurs in animal tissues, as well as in microorganisms belonging to the fungal kingdom. Discovery The effect was described by Louis Pasteur in 1857 in experiments showing that aeration of yeasted broth causes cell growth to increase while the fermentation rate decreases, based on lowered ethanol production. Explanation Yeast fungi, being facultative anaerobes, can either produce energy through ethanol fermentation or aerobic respiration. When the O2 concentration is low, the two pyruvate molecules formed through glycolysis are each fermented into ethanol and carbon dioxide. While only 2 ATP are produced per glucose, this method is utilized under anaerobic conditions because it oxidizes the electron shuttle NADH into NAD+ for another round of glycolysis and ethanol fermentation. If the concentration of oxygen increases, pyruvate is instead converted to acetyl CoA, used in the citric acid cycle, and undergoes oxidative phosphorylation. Per glucose, 10 NADH and 2 FADH2 are produced in cellular respiration for a significant amount of proton pumping to produce a proton gradient utilized by ATP Synthase. While the exact ATP output ranges based on considerations like the overall electrochemical gradient, aerobic respiration produces far more ATP than the anaerobic process of ethanol fermentation. The increased ATP and citrate from aerobic respiration allosterically inhibit the glycolysis enzyme phosphofructokinase 1 because less pyruvate is needed to produce the same amount of ATP. Despite this energetic incentive, Rosario Lagunas has shown that yeast continue to partially ferment available glucose into ethanol for many reasons. First, glucose metabolism is faster through ethanol fermentation because it involves fewer enzymes and limits all reactions to the cytoplasm. Second, ethanol has bactericidal activity by causing damage to the cell membrane and protein denaturing, allowing yeast fungus to outcompete environmental bacteria for resources. Third, partial fermentation may be a defense mechanism against environmental competitors depleting all oxygen faster than the yeast's regulatory systems could fully switch from aerobic respiration to ethanol fermentation. Practical implications The fermentation processes used in alcohol production is commonly maintained in low oxygen conditions, under a blanket of carbon dioxide, while growing yeast for biomass involves aerating the broth for maximized energy production. Despite the bactericidal effects of ethanol, acidifying effects of fermentation, and low oxygen conditions of industrial alcohol production, bacteria that undergo lactic acid fermentation can contaminate such facilities because lactic acid has a low pKa of 3.86 to avoid decoupling the pH membrane gradient that supports regulated transport. See also Ethanol fermentation Fermentation (biochemistry) Facultative anaerobic organism Allosteric regulation References Further reading Fermentation Metabolism
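A quick bookkeeping sketch in Python makes the energetic contrast concrete. The per-carrier conversion factors of roughly 2.5 ATP per NADH and 1.5 ATP per FADH2 are common textbook approximations, not figures from the article, and as noted above the true yield varies with the electrochemical gradient and transport costs.

    # Rough ATP bookkeeping per glucose; conversion factors are textbook approximations.
    ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5

    fermentation_atp = 2                                   # glycolysis only
    substrate_level = 2 + 2                                # glycolysis + citric acid cycle
    oxidative = 10 * ATP_PER_NADH + 2 * ATP_PER_FADH2      # 10 NADH and 2 FADH2 per glucose
    respiration_atp = substrate_level + oxidative

    print(fermentation_atp, respiration_atp)               # 2 versus roughly 32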
Pasteur effect
[ "Chemistry", "Biology" ]
683
[ "Cellular respiration", "Cellular processes", "Biochemistry", "Metabolism", "Fermentation" ]
9,354,534
https://en.wikipedia.org/wiki/Wollaston%20wire
Wollaston wire is a very fine (c. 0.001 mm thick) platinum wire clad in silver and used in electrical instruments. For most uses, the silver cladding is etched away by acid to expose the platinum core. History The wire is named after its inventor, William Hyde Wollaston, who first produced it in England in the early 19th century. Platinum wire is drawn through successively smaller dies until it is about in diameter. It is then embedded in the middle of a silver wire having a diameter of about . This composite wire is then drawn until the silver wire has a diameter of about , causing the embedded platinum wire to be reduced by the same 50:1 ratio to a final diameter of . Removal of the silver coating with an acid bath leaves the fine platinum wire as a product of the process. Uses Wollaston wire was used in early radio detectors known as electrolytic detectors and the hot wire barretter. Other uses include suspension of delicate devices, sensing of temperature, and sensitive electrical power measurements. It continues to be used for the fastest-responding hot-wire anemometers. References History of radio Radio electronics Wire
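The drawing step is simple proportional arithmetic: reducing the composite wire's diameter by the stated 50:1 ratio reduces the embedded platinum core by the same factor, and conserving volume stretches the length by the square of that ratio. The starting figures in the short Python sketch below are placeholders, not the article's (elided) values.

    DRAW_RATIO = 50.0    # the 50:1 diameter reduction described above

    def after_drawing(core_diameter_mm, length_m):
        """Diameter shrinks by the draw ratio; conserving volume, length grows by its square."""
        return core_diameter_mm / DRAW_RATIO, length_m * DRAW_RATIO ** 2

    # Placeholder starting core of 0.05 mm: a 50:1 draw leaves a 0.001 mm platinum core.
    print(after_drawing(0.05, 1.0))   # (0.001, 2500.0)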
Wollaston wire
[ "Engineering" ]
233
[ "Radio electronics" ]
9,354,894
https://en.wikipedia.org/wiki/Jef%20Poskanzer
Jeffrey A. Poskanzer is a computer programmer. He was the first person to post a weekly FAQ to Usenet. He developed the portable pixmap file format and pbmplus (the precursor to the Netpbm package) to manipulate it. He has also worked on the team that ported A/UX. He has shared in two USENIX Lifetime Achievement Awards – in 1993 for Berkeley Unix, and in 1996 for the Software Tools Project. He owns the Internet address acme.com (which is notable for receiving over one million e-mail spams a day), which is the home page for ACME Laboratories. It hosts a number of open source software projects; major projects maintained include both pbmplus and thttpd, an open source web server. Notes External links ACME Laboratories Jef Poskanzer's Resumé A/UX people Living people Year of birth missing (living people)
Jef Poskanzer
[ "Technology" ]
196
[ "Computing stubs", "Computer specialist stubs" ]
9,355,054
https://en.wikipedia.org/wiki/Malaria%20antigen%20detection%20tests
Malaria antigen detection tests are a group of commercially available rapid diagnostic tests of the rapid antigen test type that allow quick diagnosis of malaria by people who are not otherwise skilled in traditional laboratory techniques for diagnosing malaria or in situations where such equipment is not available. There are currently over 20 such tests commercially available (WHO product testing 2008). The first malaria antigen suitable as target for such a test was a soluble glycolytic enzyme Glutamate dehydrogenase. None of the rapid tests are currently as sensitive as a thick blood film, nor as cheap. A major drawback in the use of all current dipstick methods is that the result is essentially qualitative. In many endemic areas of tropical Africa, however, the quantitative assessment of parasitaemia is important, as a large percentage of the population will test positive in any qualitative assay. Antigen-based Malaria Rapid Diagnostic Tests Malaria is a curable disease if the patients have access to early diagnosis and prompt treatment. Antigen-based rapid diagnostic tests (RDTs) have an important role at the periphery of health services capability because many rural clinics do not have the ability to diagnose malaria on-site due to a lack of microscopes and trained technicians to evaluate blood films. Furthermore, in regions where the disease is not endemic, laboratory technologists have very limited experience in detecting and identifying malaria parasites. An ever increasing numbers of travelers from temperate areas each year visit tropical countries and many of them return with a malaria infection. The RDT tests are still regarded as complements to conventional microscopy but with some improvements it may well replace the microscope. The tests are simple and the procedure can be performed on the spot in field conditions. These tests use finger-stick or venous blood, the completed test takes a total of 15–20 minutes, and a laboratory is not needed. The threshold of detection by these rapid diagnostic tests is in the range of 100 parasites/μL of blood compared to 5 by thick film microscopy. pGluDH An accurate diagnosis is becoming more and more important, in view of the increasing resistance of Plasmodium falciparum and the high price of alternatives to chloroquine. The enzyme pGluDH does not occur in the host red blood cell and was recommended as a marker enzyme for Plasmodium species by Picard-Maureau et al. in 1975. The malaria marker enzyme test is suitable for routine work and is now a standard test in most departments dealing with malaria. Presence of pGluDH is known to represent parasite viability and a rapid diagnostic test using pGluDH as antigen would have the ability to differentiate live from dead organisms. A complete RDT with pGluDH as antigen has been developed in China and is now undergoing clinical trials. GluDHs are ubiquitous enzymes that occupy an important branch-point between carbon and nitrogen metabolism. Both nicotinamide adenine dinucleotide (NAD) [EC 1.4.1.2] and nicotinamide adenine dinucleotide phosphate (NADP) dependent GluDH [EC 1.4.1.4] enzymes are present in Plasmodia; the NAD-dependent GluDH is relatively unstable and not useful for diagnostic purposes. Glutamate dehydrogenase provides an oxidizable carbon source used for the production of energy as well as a reduced electron carrier, NADH. Glutamate is a principal amino donor to other amino acids in subsequent transamination reactions. 
The multiple roles of glutamate in nitrogen balance make it a gateway between free ammonia and the amino groups of most amino acids. Its crystal structure is published. The GluDH activity in P.vivax, P.ovale and P. malariae has never been tested, but given the importance of GluDH as a branch point enzyme, every cell must have a high concentration of GluDH. It is well known that enzymes with a high molecular weight (like GluDH) have many isozymes, which allows strain differentiations (given the right monoclonal antibody). The host produces antibodies against the parasitic enzyme indicating a low sequence identity. Histidine rich protein II The histidine-rich protein II (HRP II) is a histidine- and alanine-rich, water-soluble protein, which is localized in several cell compartments including the parasite cytoplasm. The antigen is expressed only by P. falciparum trophozoites. HRP II from P. falciparum has been implicated in the biocrystallization of hemozoin, an inert, crystalline form of ferriprotoporphyrin IX (Fe(3+)-PPIX) produced by the parasite. A substantial amount of the HRP II is secreted by the parasite into the host bloodstream and the antigen can be detected in erythrocytes, serum, plasma, cerebrospinal fluid and even urine as a secreted water-soluble protein. These antigens persist in the circulating blood after the parasitaemia has cleared or has been greatly reduced. It generally takes around two weeks after successful treatment for HRP2-based tests to turn negative, but may take as long as one month, which compromises their value in the detection of active infection. False positive dipstick results were reported in patients with rheumatoid-factor-positive rheumatoid arthritis. Since HRP-2 is expressed only by P. falciparum, these tests will give negative results with samples containing only P. vivax, P. ovale, or P. malariae; many cases of non-falciparum malaria may therefore be misdiagnosed as malaria negative (some P.falciparum strains also don't have HRP II). The variability in the results of pHRP2-based RDTs is related to the variability in the target antigen. pLDH P. falciparum lactate dehydrogenase (PfLDH) is a 33 kDa oxidoreductase [EC 1.1.1.27]. It is the last enzyme of the glycolytic pathway, essential for ATP generation and one of the most abundant enzymes expressed by P. falciparum. Plasmodium LDH (pLDH) from P. vivax, P. malariae, and P. ovale) exhibit 90-92% identity to PfLDH from P. falciparum. pLDH levels have been seen to reduce in the blood sooner after treatment than HRP2. In this respect, pLDH is similar to pGluDH. Nevertheless, the kinetic properties and sensitivities to inhibitors targeted to the cofactor binding site differ significantly and are identifiable by measuring dissociation constants for inhibitors which, differ by up to 21-fold. pAldo Fructose-bisphosphate aldolase [EC 4.1.2.13] catalyzes a key reaction in glycolysis and energy production and is produced by all four species. The P.falciparum aldolase is a 41 kDa protein and has 61-68% sequence similarity to known eukaryotic aldolases. Its crystal structure has been published. The presence of antibodies against p41 in the sera of human adults partially immune to malaria suggest that p41 is implicated in protective immune response against the parasite. See also Romanowsky stain References External links Malaria Antibodies Roll back malaria WHO product testing 2008 WHO Rapid Diagnostic Tests (RDTs) Malaria Blood tests
Malaria antigen detection tests
[ "Chemistry" ]
1,570
[ "Blood tests", "Chemical pathology" ]
9,355,428
https://en.wikipedia.org/wiki/Wentwood
Wentwood (), in Monmouthshire, South Wales, is a forested area of hills, rising to above sea level. It is located to the northeast of, and partly within the boundaries of, the city of Newport. Geology Wentwood is underlain by sandstones which are assigned to the Brownstones Formation of the Old Red Sandstone, a suite of sedimentary rocks laid down during the Devonian period. The beds dip gently to moderately in a south-easterly direction. It is the southernmost part of a range of hills formed by the relatively hard-wearing Brownstones sandstones which stretch in a rough arc northwards through eastern Monmouthshire, the broadly west-facing scarps of which are generally well wooded. Wentwood hamlet There is a small number of houses in Wentwood, known as Wentwood hamlet. Gilgal Chapel is a restored church in Wentwood. Ancient woodland It is the largest ancient woodland in Wales and the ninth largest in the UK. The current wooded area is a remnant of a much larger ancient forest which once extended between the rivers Usk and Wye and which divided the old kingdom of Gwent into two – Gwent Uwchcoed and Iscoed, that is, above and below the wood. Prehistory The area contains Bronze Age burial mounds, a stone circle, and a megalithic alignment on Gray Hill, Monmouthshire. Middle Ages In the Middle Ages, the woods belonged to the lordship of Chepstow and provided hunting preserves, and timber, fuel and pasturage for the tenants of nearby manors. The Royal Forest of Wentwood had its own forest laws and courts were held twice yearly at Forester's Oaks, above Wentwood Reservoir. These courts tried luckless locals charged with a range of crimes within the forest boundaries, from sheep stealing to poaching deer. These crimes were taken so seriously that culprits were hanged from one of the two Forester's Oaks. The last offender dealt with in this severe way was hanged as recently as 1829. Later history The edges of the wood were gradually cleared and felled away in the 16th century and 17th century by farmers. In 1678 Wentwood was the scene of riots led by Nathan Rogers and Edward Kemys against the actions of Henry Somerset, 1st Duke of Beaufort who, as Lord Lieutenant of Monmouthshire and Governor of Chepstow Castle, enclosed some of the forest for his own use, and began to fell trees for use in his ironworks at Tintern. The tenants of the area, including Rogers, claimed that the ancient rights to the forest belonged to them, and rioted when 50 of Somerset's armed men arrived to carry away the felled wood. Many stands of substantial mature Welsh Oaks were felled to meet the demand for stout oak heartwoods in Royal Navy battleships and men o' war of the Napoleonic era of the 19th century, such as HMS Victory and others, but the heart of the forest remained preserved for charcoal production, a necessity for the iron industry and local ironworks. Henry Somerset sold 2,244 trees in Wentwood Forest, described as "the largest wood in England", in May 1902. The first conifer plantations were planted at Wentwood in 1880, and most of the native trees were felled during World War I to provide timber for props and supports for the trenches. When the area was replanted by the Forestry Commission in the 1950s and 1960s, the original broadleaved deciduous trees were largely replaced with non-native conifers, damaging the woodland habitat. More recently, broadleaved trees have been allowed to grow back. 
Recreation area Wentwood and its surrounding areas are popular with hillwalking and mountain biking enthusiasts, and the Wentwood Reservoir, opened in 1904, was a centre for trout fishing prior to being drained in 2019 for refurbishment works by utility company Dŵr Cymru (Welsh Water). The reservoir is slowly being refilled by natural water capture, which is expected to take around 12 months. The area is also home to thousands of wildlife species. These include 75 species of bird, including turtle doves, nightjars and spotted flycatchers; dormice; Eurasian otters; pipistrelle bats; and ancient woodland plants, such as wild daffodil, wood sorrel, and yellow pimpernel. In 2006, the Woodland Trust completed the purchase of some 352 hectares (nearly 900 acres) of Wentwood after a high-profile campaign, and plans a programme of conservation and restoration. In April 2007, an illegal rave event took place in Wentwood, attended by around 3,000 people before it was broken up. Vehicle access to much of the site is restricted, to protect the ancient monuments. Despite this, off-road vehicles have regularly caused problems, culminating in damage to one of the prehistoric burial mounds over the Christmas holidays of 2019. References External links Woodland Trust site with more information www.geograph.co.uk : photos of Wentwood and surrounding areas Forests and woodlands of Monmouthshire Districts of Newport, Wales Forests and woodlands of Newport, Wales Landmarks in Newport, Wales Tourist attractions in Newport, Wales Mountains and hills of Monmouthshire Marilyns of Wales Old-growth forests
Wentwood
[ "Biology" ]
1,054
[ "Old-growth forests", "Ecosystems" ]
9,355,858
https://en.wikipedia.org/wiki/British%20Columbia%20Energy%20Regulator
The British Columbia Energy Regulator (BCER), formerly the BC Oil and Gas Commission, is the Crown Corporation responsible for energy regulation in British Columbia, Canada. Established in October 1998, it has offices in seven cities: Fort St. John, Fort Nelson, Kelowna, Victoria, Terrace, Dawson Creek, and Prince George. Purpose The BCER is defined under the Energy Resource Activities Act. Under this law, the BCER's purposes are to 'regulate energy resource activities in a manner that protects public safety and the environment, support reconciliation with Indigenous peoples and the transition to low-carbon energy, conserve energy resources and foster a sound economy and social well-being.' The BCER's mandate does not extend to regulating consumer gas prices at the pump. ERAA defines ‘energy resource’ as petroleum, natural gas, hydrogen, methanol or ammonia. The BCER also manages oil and aspects of geothermal resources, with an expanded role in carbon capture and storage (CCS). Governance The BCER is accountable to the provincial legislature and the public through the Ministry of Energy, Mines and Low Carbon Innovation. The BCER is governed by a Board of Directors responsible for providing oversight of the BCER and its operations. The Board consists of five to seven directors and includes Indigenous representation. The current strategic and operational leader of the BCER is Commissioner and Chief Executive Officer Michelle Carr. Compliance and enforcement information In 2010–11, the OGC "issued 15 penalty tickets with fines of $575 (the maximum allowed for tickets) or less, which included unlawful water withdrawals and failure to promptly report a spill. Court prosecutions included a $20,000 fine for a Water Act stream violation, $10,575 for another stream violation and $250,000 for a sour gas release. [...] The commission would not release the names of the companies convicted". Per the OGC, in 2012, of "more than 800 deficiencies, 80 resulted in charges, largely under the provincial Water Act for the non-reporting of water volumes and a smaller portion under the provincial Environment Management Act. Another 13 resulted in orders under the provincial Oil and Gas Activities Act, 22 in warnings, 76 in letters requiring action and three in referrals to other agencies". Paul Jeakins, OGC commissioner and CEO, has publicly acknowledged that OGC inspection and enforcement reports are "a bit of a gap". Lawsuits In November 2013, Ecojustice, the Sierra Club and the Wilderness Committee filed a lawsuit against the OGC and Encana about Encana's water use from lakes and rivers for its hydraulic fracturing for shale gas, "granted by repeated short-term water permits, a violation of the provincial water act". In 2012, the OGC had granted Encana access to 20.4 million cubic metres of surface water, 7 million of which were for fracking and 54% of that was through short-term approvals. In October 2014 the Supreme Court of British Columbia found no violation and dismissed the case. Criticism The agency has been criticized for being "too industry-friendly", having "vague regulations" and issuing non-transparent fracking violation reports. However, the BCER does publicly identify companies convicted of fracking violations online. The B.C. Ministry of Environment and other Crown corporations of B.C. like WorkSafeBC have reported company names and details of those penalties for years. BCER reports have been available online since 2009.
See also History of the petroleum industry in Canada (frontier exploration and development) Greater Sierra (oil field) in B.C. List of Canadian natural gas pipelines List of Canadian oil pipelines List of Canadian pipeline accidents List of oil spills List of largest oil and gas companies by revenue Oil and gas law in the United States Petroleum fiscal regime Pipeline and Hazardous Materials Safety Administration- US counterpart regarding pipelines References External links BC Energy Regulator - official site Crown corporations of British Columbia Energy in British Columbia Energy regulatory authorities of Canada Energy-related government agencies of Canada Petroleum organizations Petroleum industry in British Columbia Hydraulic fracturing
British Columbia Energy Regulator
[ "Chemistry", "Engineering" ]
835
[ "Petroleum technology", "Energy organizations", "Petroleum", "Petroleum organizations", "Natural gas technology", "Hydraulic fracturing" ]
9,356,047
https://en.wikipedia.org/wiki/Jackson%27s%20inequality
In approximation theory, Jackson's inequality is an inequality bounding the value of a function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials. Statement: trigonometric polynomials For trigonometric polynomials, the following was proved by Dunham Jackson: Theorem 1: If is an times differentiable periodic function such that then, for every positive integer , there exists a trigonometric polynomial of degree at most such that where depends only on . The Akhiezer–Krein–Favard theorem gives the sharp value of (called the Akhiezer–Krein–Favard constant): Jackson also proved the following generalisation of Theorem 1: Theorem 2: One can find a trigonometric polynomial of degree such that where denotes the modulus of continuity of function with the step An even more general result of four authors can be formulated as the following Jackson theorem. Theorem 3: For every natural number , if is a -periodic continuous function, there exists a trigonometric polynomial of degree such that where the constant depends on and is the -th order modulus of smoothness. For this result was proved by Dunham Jackson. Antoni Zygmund proved the inequality in the case when in 1945. Naum Akhiezer proved the theorem in the case in 1956. For this result was established by Sergey Stechkin in 1967. Further remarks Generalisations and extensions are called Jackson-type theorems. A converse to Jackson's inequality is given by Bernstein's theorem. See also constructive function theory. References External links Approximation theory Inequalities Theorems in approximation theory
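A numerical illustration in Python, not tied to any particular form of the theorem: truncated Fourier partial sums are trigonometric polynomials of degree n, so their sup-norm error is an upper bound on the best-approximation error that Jackson's inequality controls. For a 2*pi-periodic test function of limited smoothness such as |sin x|^3, the observed error decays at an algebraic rate as the degree grows, consistent with the informal message above that smoother functions are approximated faster.

    import math

    def f(x):                         # a 2*pi-periodic test function of limited smoothness
        return abs(math.sin(x)) ** 3

    def fourier_coeffs(func, degree, m=4096):
        """Trapezoidal-rule Fourier coefficients a_k, b_k for k = 0..degree."""
        xs = [2 * math.pi * j / m for j in range(m)]
        fs = [func(x) for x in xs]
        a = [2.0 / m * sum(v * math.cos(k * x) for v, x in zip(fs, xs)) for k in range(degree + 1)]
        b = [2.0 / m * sum(v * math.sin(k * x) for v, x in zip(fs, xs)) for k in range(degree + 1)]
        return a, b

    def partial_sum(a, b, x):
        return a[0] / 2 + sum(a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
                              for k in range(1, len(a)))

    grid = [2 * math.pi * j / 1000 for j in range(1000)]
    for n in (4, 8, 16, 32, 64):
        a, b = fourier_coeffs(f, n)
        err = max(abs(f(x) - partial_sum(a, b, x)) for x in grid)
        print(n, err)                 # the sup-norm error shrinks roughly like n**-3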
Jackson's inequality
[ "Mathematics" ]
364
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Mathematical analysis stubs", "Approximation theory", "Binary relations", "Theorems in approximation theory", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Approximations" ...
9,356,096
https://en.wikipedia.org/wiki/Polymeric%20liquid%20crystal
Polymeric liquid crystals are similar to monomeric liquid crystals used in displays. Both have dielectric anitroscopy, or the ability to change directions and absorb or transmit light depending on electric fields. Polymeric liquid crystals form long head-to-tail or side chain polymers, which are woven in thick mats and therefore have high viscosities. The high viscosities allow the polymeric liquid crystals to be used in complex structures, but they are harder to align, limiting their usefulness. The polymerics align in microdomains facing all different directions, which ruins the optical effect. One solution to this is to mix in a small amount of photo-curing polymer, which when spin-coated onto a surface can be hardened. Basically, the polymeric liquid crystal and photocurer are aligned in one direction, and then the photo curer is cured, "freezing" the polymeric in one direction. References Liquid crystals
Polymeric liquid crystal
[ "Physics", "Materials_science" ]
192
[ "Materials science stubs", "Condensed matter stubs", "Condensed matter physics" ]
9,357,386
https://en.wikipedia.org/wiki/Wake%20Forest%20Institute%20for%20Regenerative%20Medicine
The Wake Forest Institute for Regenerative Medicine (WFIRM) is a research institute affiliated with Wake Forest School of Medicine and located in Winston-Salem, North Carolina, United States. WFIRM's goal is to apply the principles of regenerative medicine to repair or replace diseased tissues and organs. Among other goals, WFIRM scientists are looking for ways to create insulin-producing cells in the laboratory, engineer blood vessels for heart bypass surgery and treat knee injuries through regenerated meniscus tissues. WFIRM has also led two federal initiatives to regenerate tissues from battlefield injuries (AFIRM I and AFIRM II), with a combined funding of $160 million from the U.S. Department of Defense. WFIRM is working to develop more than 40 different organs and tissues in the laboratory. Anthony Atala, M.D., is the director of the institute, which is located in Wake Forest Innovation Quarter in downtown Winston-Salem. Atala was recruited by Wake Forest Baptist Medical Center in 2004, and brought many of his team members from the Laboratory for Tissue Engineering and Cellular Therapeutics at the Children's Hospital Boston and Harvard Medical School. Notable achievements announced at WFIRM include the first lab-grown organ, a urinary bladder. The artificial urinary bladder was the first to be implanted into a human. WFIRM research also discovered stem cells harvested from the amniotic fluid of pregnant women. These stem cells are pluripotent, meaning that they can be manipulated to differentiate into various types of mature cells that make up nerve, muscle, bone, and other tissues while avoiding the problems of tumor formation and ethical concerns that are associated with embryonic stem cells. Research at WFIRM was also essential to developing the field of bioprinting. This was first accomplished by converting a Hewlett Packard paper and ink printer to deposit cells, which is now on display at the National Museum of Health and Medicine. Later, the more advanced Integrated Tissue-Organ Printer (ITOP) was developed at the institute. In 2019, the U.S. federal Department of Health and Human Services (HHS) provided a 5-year grant through BARDA to support further development of WFIRM technology to better understand damage to the body caused by inhaling chlorine gas. The technology is called "lung-on-a-chip" and is a part of a "miniaturized system of human organs" developed by WFIRM that can allow researchers to create models of the body's response to harmful agents. The institute is also involved in research on energy fields and the human biofield. This led to a retracted article on Energy Medicine. References External links Wake Forest Institute for Regenerative Medicine Wake Forest University research institutes Economy of Winston-Salem, North Carolina Buildings and structures in Winston-Salem, North Carolina 2006 establishments in North Carolina Life sciences industry
Wake Forest Institute for Regenerative Medicine
[ "Biology" ]
586
[ "Life sciences industry" ]
9,357,617
https://en.wikipedia.org/wiki/Human%E2%80%93Computer%20Interaction%20Institute
The Human–Computer Interaction Institute (HCII) is a department within the School of Computer Science at Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania. It is considered one of the leading centers of human–computer interaction research, and was named one of the top ten most innovative schools in information technology by Computer World in 2008. For the past three decades, the institute has been the predominant publishing force at leading HCI venues, most notably ACM CHI, where it regularly contributes more than 10% of the papers. Research at the institute aims to understand and create technology that harmonizes with and improves human capabilities by integrating aspects of computer science, design, social science, and learning science. HCII offers Human Computer Interaction (HCI) as an additional major for undergraduates, as well as a master's degree and PhDs in HCI. Students from various academic backgrounds come together from around the world to participate in this program. Students hold undergraduate degrees in psychology, design, and computer science, as well as many others. Students enter the program at various stages in their academic and professional careers. HCII research and educational programs span a full cycle of knowledge creation. The cycle includes research on how people work, play, and communicate within groups, organizations, and social structures. It includes the design, creation, and evaluation of technologies and tools to support human and social activities. Academics The institute offers degrees at the undergraduate, graduate, and doctoral levels. Notable faculty Randy Pausch was a professor of computer science, human-computer interaction and design. Pausch was also a best-selling author, who became known around the world after he gave "The Last Lecture" speech on September 18, 2007, at Carnegie Mellon. Pausch was instrumental in the development of Alice, a computer teaching tool. He also co-founded Carnegie Mellon's Entertainment Technology Center. Randy Pausch died on July 25, 2008. Jodi Forlizzi has been a faculty member with the department since 2000. She specializes in interaction design and received a self-defined Ph.D. in human computer interaction and design at Carnegie Mellon University in 2007. She has a background in fine arts, with a bachelor's degree in Illustration from the University of the Arts. She is a member of the Association for Computing Machinery’s CHI Academy and the Walter Reed Army Medical Center has honored her for excellence in human-robot interaction design research. Chris Harrison is a professor in the Human–Computer Interaction Institute and director of its Future Interfaces Group. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is known for his pioneering work on scratch input and for developing Skinput and Omnitouch. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup. Robert Kraut is a Herbert A. Simon Professor of Human–Computer Interaction. His interests lie in social computing, design, and information technology. In 2016 he received the Carnegie Mellon School of Computer Science – SCS Allen Newell Research Award for his research on "Designing Online Communities." Jessica Hammer is an Associate Professor of Learning Sciences and the Director of the Center for Transformational Play. She has a joint appointment with the Entertainment Technology Center. Hammer researches the psychology of games.
She also started the OHLab, along with Amy Ogan and associated students, staff, and colleagues. Kenneth Koedinger is a Professor of human–computer interaction and psychology. He is well known for his research on intelligent tutoring systems and cognitive tutors. He is a leader in the Learning Sciences and Educational Technology communities with many publications and grants in these areas. Vincent Aleven is a Professor of human–computer interaction. His research is in the areas of intelligent tutoring systems and the Learning Sciences. Aleven has been a co-editor of the International Journal for Artificial Intelligence in Education for many years. Bruce M. McLaren is a Professor of human–computer interaction. His research is in the areas of educational games, intelligent tutoring systems, and the Learning Sciences. He is a former President of the International Artificial Intelligence in Education Society (2017-2019). Amy Ogan is an Associate Professor in the HCII department with interests in emerging technologies for education. She graduated from Carnegie Mellon twice, first as an undergraduate with degrees in Spanish, Computer Science, and Human–Computer Interaction, and later with a doctoral degree in Human–Computer Interaction. She is a recipient of the Jacobs Foundation Research Fellowship due to her interest in youth education and development. Research Some fields in which notable research is currently being done at the HCII are Learning Technologies, Tools and Technology, Human Assistance, Robotics, Arts and Entertainment, and the Entertainment Technology Center (ETC). References Educational institutions established in 1993 Schools and departments of Carnegie Mellon Human–computer interaction 1993 establishments in Pennsylvania
Human–Computer Interaction Institute
[ "Engineering" ]
974
[ "Human–computer interaction", "Human–machine interaction" ]
9,357,898
https://en.wikipedia.org/wiki/Constraint%20%28computational%20chemistry%29
In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The general steps involved are: (i) choose novel unconstrained coordinates (internal coordinates), (ii) introduce explicit constraint forces, (iii) minimize constraint forces implicitly by the technique of Lagrange multipliers or projection methods. Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred. Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics, typically the lengths of covalent bonds to hydrogen are constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied. Mathematical background The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form where M is a mass matrix and q is the vector of generalized coordinates that describe the particles' positions. For example, the vector q may be the 3N Cartesian coordinates of the particle positions rk, where k runs from 1 to N; in the absence of constraints, M would be the 3N×3N diagonal square matrix of the particle masses. The vector f represents the generalized forces and the scalar V(q) represents the potential energy, both of which are functions of the generalized coordinates q. If M constraints are present, the coordinates must also satisfy M time-independent algebraic equations where the index j runs from 1 to M. For brevity, these functions gj are grouped into an M-dimensional vector g below. The task is to solve the combined set of differential-algebraic (DAE) equations, instead of just the ordinary differential equations (ODE) of Newton's second law. This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it. The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become non-diagonal and depend on the generalized coordinates. A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body.
The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short time-steps, making simulations inefficient computationally. A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints. Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions. Internal coordinate methods The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called internal coordinates corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond. The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers. Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics. Analogous methods were applied later to other systems. Lagrange multiplier-based methods In most molecular dynamics simulations that use constraint algorithms, constraints are enforced using the method of Lagrange multipliers. Given a set of n linear (holonomic) constraints at the time t, where and are the positions of the two particles involved in the kth constraint at the time t and is the prescribed inter-particle distance. The forces due to these constraints are added to the equations of motion, resulting in, for each of the N particles in the system Adding the constraint forces does not change the total energy, as the net work done by the constraint forces (taken over the set of particles that the constraints act on) is zero. Note that the sign on is arbitrary and some references have an opposite sign. By integrating both sides of the equation with respect to time, the constrained coordinates of particles at the time, , are given, where is the unconstrained (or uncorrected) position of the ith particle after integrating the unconstrained equations of motion. To satisfy the constraints in the next timestep, the Lagrange multipliers should be determined from the following equation, This implies solving a system of non-linear equations simultaneously for the unknown Lagrange multipliers . This system of non-linear equations in unknowns is commonly solved using the Newton–Raphson method where the solution vector is updated using where is the Jacobian of the equations σk: Since not all particles contribute to all of the constraints, is a block matrix and can be solved block by block; in other words, it can be solved individually for each molecule. Instead of constantly updating the vector , the iteration can be started with , resulting in simpler expressions for and .
In this case then is updated to After each iteration, the unconstrained particle positions are updated using The vector is then reset to The above procedure is repeated until the solution of the constraint equations, , converges to within a prescribed numerical tolerance. Although there are a number of algorithms to compute the Lagrange multipliers, they differ only in the method used to solve the system of equations; quasi-Newton methods are commonly used for this purpose. The SETTLE algorithm The SETTLE algorithm solves the system of non-linear equations analytically for constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. SPC/E and TIP3P water models). The SHAKE algorithm The SHAKE algorithm was first developed for satisfying a bond geometry constraint during molecular dynamics simulations. The method was then generalised to handle any holonomic constraint, such as those required to maintain constant bond angles, or molecular rigidity. In the SHAKE algorithm, the system of non-linear constraint equations is solved using the Gauss–Seidel method, which approximates the solution of the linear system of equations using the Newton–Raphson method. This amounts to assuming that the matrix is diagonally dominant and solving the kth equation only for the kth unknown. In practice, we compute for all iteratively until the constraint equations are solved to a given tolerance. The calculation cost of each iteration is , and the iterations themselves converge linearly. A noniterative form of SHAKE was developed later on. Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method. The original SHAKE algorithm is capable of constraining both rigid and flexible molecules (e.g. water, benzene and biphenyl) and introduces negligible error or energy drift into a molecular dynamics simulation. One issue with SHAKE is that the number of iterations required to reach a certain level of convergence rises as molecular geometry becomes more complex. To reach 64-bit computer accuracy (a relative tolerance of ) in a typical molecular dynamics simulation at a temperature of 310 K, a 3-site water model having 3 constraints to maintain molecular geometry requires an average of 9 iterations (which is 3 per site per time-step). A 4-site butane model with 5 constraints needs 17 iterations (22 per site), a 6-site benzene model with 12 constraints needs 36 iterations (72 per site), while a 12-site biphenyl model with 29 constraints requires 92 iterations (229 per site per time-step). Hence the CPU requirements of the SHAKE algorithm can become significant, particularly if a molecular model has a high degree of rigidity. A later extension of the method, QSHAKE (Quaternion SHAKE), was developed as a faster alternative for molecules composed of rigid units, but it is not as general purpose. It works satisfactorily for rigid loops such as aromatic ring systems but QSHAKE fails for flexible loops, such as when a protein has a disulfide bond. Further extensions include RATTLE, WIGGLE, and MSHAKE.
RATTLE works the same way as SHAKE but uses the velocity Verlet time integration scheme; WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers based on the particle velocities. MSHAKE computes corrections to the constraint forces, achieving better convergence. A final modification to the SHAKE algorithm is the P-SHAKE algorithm, which is applied to very rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian to become diagonal or strongly diagonally dominant. The thus de-coupled constraints converge much faster (quadratically as opposed to linearly) at a cost of . The M-SHAKE algorithm The M-SHAKE algorithm solves the non-linear system of equations using Newton's method directly. In each iteration, the linear system of equations is solved exactly using an LU decomposition. Each iteration costs operations, yet the solution converges quadratically, requiring fewer iterations than SHAKE. This solution was first proposed in 1986 by Ciccotti and Ryckaert under the title "the matrix method", yet differed in the solution of the linear system of equations. Ciccotti and Ryckaert suggest inverting the matrix directly, yet doing so only once, in the first iteration. The first iteration then costs operations, whereas the following iterations cost only operations (for the matrix-vector multiplication). This improvement comes at a cost, though: since the Jacobian is no longer updated, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm. Several variants of this approach based on sparse matrix techniques were studied by Barth et al. SHAPE algorithm The SHAPE algorithm is a multicenter analog of SHAKE for constraining rigid bodies of three or more centers. Like SHAKE, an unconstrained step is taken and then corrected by directly calculating and applying the rigid body rotation matrix that satisfies: This approach involves a single 3×3 matrix diagonalization followed by three or four rapid Newton iterations to determine the rotation matrix. SHAPE provides the identical trajectory that is provided with fully converged iterative SHAKE, yet it is found to be more efficient and more accurate than SHAKE when applied to systems involving three or more centers. It extends the ability of SHAKE-like constraints to linear systems with three or more atoms, planar systems with four or more atoms, and to significantly larger rigid structures where SHAKE is intractable. It also allows rigid bodies to be linked with one or two common centers (e.g. peptide planes) by solving rigid body constraints iteratively in the same basic manner that SHAKE is used for atoms involving more than one SHAKE constraint. LINCS algorithm An alternative constraint method, LINCS (Linear Constraint Solver), was developed in 1997 by Hess, Bekker, Berendsen and Fraaije, and was based on the 1986 method of Edberg, Evans and Morriss (EEM), and a modification thereof by Baranyai and Evans (BE). LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian in each step of the Newton iteration. This approximation only works for matrices with eigenvalues smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity. LINCS has been reported to be 3–4 times faster than SHAKE.
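To make the iterative Lagrange-multiplier correction described above concrete, the following sketch applies a SHAKE-style Gauss–Seidel sweep to a set of pairwise distance constraints. It is a minimal illustration rather than code from any particular molecular dynamics package; the function and variable names are hypothetical, and the correction factor is the standard first-order estimate derived from the constraint sigma_k = |r_i - r_j|^2 - d^2 = 0.

```python
import numpy as np

def shake_sweep(r_old, r_unc, inv_mass, constraints, tol=1e-10, max_sweeps=500):
    """Correct unconstrained positions so that pairwise distance constraints hold.

    r_old       : (N, 3) positions at the previous time step
    r_unc       : (N, 3) positions after the unconstrained update
    inv_mass    : (N,)   inverse particle masses
    constraints : list of (i, j, d) with target distances |r_i - r_j| = d
    """
    r = r_unc.copy()
    for _ in range(max_sweeps):
        converged = True
        for i, j, d in constraints:            # Gauss-Seidel: one constraint at a time
            s = r[i] - r[j]                    # current bond vector
            sigma = s @ s - d * d              # constraint violation sigma_k
            if abs(sigma) > tol:
                converged = False
                s_old = r_old[i] - r_old[j]    # direction along which the correction acts
                # first-order Lagrange-multiplier estimate for this constraint
                g = sigma / (2.0 * (inv_mass[i] + inv_mass[j]) * (s @ s_old))
                r[i] -= g * inv_mass[i] * s_old
                r[j] += g * inv_mass[j] * s_old
        if converged:
            return r
    raise RuntimeError("constraint iteration did not converge")

# Example: a single rigid bond of length 1.0 between two unit-mass particles,
# stretched to 1.2 by the unconstrained step and pulled back by the sweep.
r_old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
r_unc = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
r_new = shake_sweep(r_old, r_unc, np.ones(2), [(0, 1, 1.0)])
```

In a velocity Verlet integrator, as in RATTLE, an analogous sweep would also be applied to the velocities so that they remain perpendicular to the constrained bonds; the position sweep shown here corresponds to the original SHAKE scheme.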
Hybrid methods Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method. This approach was pioneered by Lagrange, and results in Lagrange equations of the mixed type. See also Molecular dynamics Software for molecular mechanics modeling References and footnotes Molecular dynamics Computational chemistry Molecular physics Computational physics Numerical differential equations
Constraint (computational chemistry)
[ "Physics", "Chemistry" ]
2,823
[ "Molecular physics", "Computational physics", "Molecular dynamics", "Theoretical chemistry", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
9,358,077
https://en.wikipedia.org/wiki/Core%20router
A core router is a router designed to operate in the Internet backbone, or core, or in core networks of internet service providers. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols being used in the core. A core router is distinct from an edge router: edge routers sit at the edge of a backbone network and connect to core routers. History Like the term "supercomputer", the term "core router" refers to the largest and most capable routers of the then-current generation. A router that was a core router when introduced would likely not be a core router ten years later. Although the local area NPL network was using line speeds of 768 kbit/s from 1967, at the inception of the ARPANET (the Internet's predecessor) in 1969, the fastest links were 56 kbit/s. A given routing node had at most six links. The "core router" was a dedicated minicomputer called an IMP (Interface Message Processor). Link speeds increased steadily, requiring progressively more powerful routers until the mid-1990s, when the typical core link speed reached 155 Mbit/s. At that time, several breakthroughs in fiber-optic telecommunications technologies (notably DWDM and EDFA) combined to lower bandwidth costs, which in turn drove a sudden dramatic increase in core link speeds: by 2000, a core link operated at 2.5 Gbit/s and core Internet companies were planning for 10 Gbit/s speeds. The largest provider of core routers in the 1990s was Cisco Systems, which provided core routers as part of a broad product line. Juniper Networks entered the business in 1996, focusing primarily on core routers and addressing the need for a radical increase in routing capability that was driven by the increased link speed. In addition, several new companies attempted to develop new core routers in the late 1990s. It was during this period that the term "core router" came into wide use. The required forwarding rate of these routers became so high that it could not be met with a single processor or a single memory, so these systems all employed some form of a distributed architecture based on an internal switching fabric. The Internet was historically supply-limited, and core Internet providers struggled to expand the Internet to meet the demand. During the late 1990s, they expected a radical increase in demand, driven by the Dot-com bubble. By 2001, it became apparent that the sudden expansion in core link capacity had outstripped the actual demand for Internet bandwidth in the core. The core Internet providers were able to defer purchases of new core routers for a time, and most of the new companies went out of business. As of 2012, the typical Internet core link speed is 40 Gbit/s, with many links at higher speeds, reaching or exceeding 100 Gbit/s (out of a theoretical current maximum of 111 Gbit/s, provided by Nippon Telegraph and Telephone), provisioning the explosion in demand for bandwidth in the current generation of cloud computing and other bandwidth-intensive (and often latency-sensitive) applications such as high-definition video streaming (see IPTV) and Voice over IP.
This, along with newer technologies – such as DOCSIS 3, channel bonding, and VDSL2 (the latter of which can wring more than 100 Mbit/s out of plain, unshielded twisted-pair copper under normal conditions, out of a theoretical maximum of 250 Mbit/s at 0 m from the VRAD) – and more sophisticated provisioning systems – such as FTTN (fiber [optic cable] to the node) and FTTP (fiber to the premises, either to the home or provisioned with Cat 5e cable) – can provide downstream speeds to the mass-market residential consumer in excess of 300 Mbit/s and upload speeds in excess of 100 Mbit/s with no specialized equipment or modification (e.g. Verizon FiOS). Current core router manufacturers (core router models in parentheses) Nokia (7950 Extensible Routing System [XRS] Series, 7750 series) Ciena (Ciena 5430 15T, Ciena 6500) Cisco Systems (8000 series, CRS (former), Network Convergence System 6000) Extreme Networks (Black Diamond 20808) Ericsson (SSR series) Huawei Technologies Ltd. (NetEngine 9000 (NE9000), NetEngine 5000E, NetEngine 80E, NetEngine 80) Juniper Networks (Juniper T-Series and PTX Series) ZTE (ZXR10 Series: T8000, M6000) Previous core router manufacturers Alcatel-Lucent (acquired by Nokia in 2016) Allegro Networks Axiowave Networks Avici Systems (changed name to Soapstone Networks in 2008 and no longer makes core routers) Brocade Communications Systems (NetIron XMR Series) Caspian Networks (closed in 2006, makers of core routers A120 and A50) Charlotte's Web Networks Chiaro Networks (closed in 2005, maker of Chiaro Enstara core routers) Foundry Networks (acquired by Brocade in 2008) Hyperchip IPOptical Ironbridge Marconi (telecom business acquired by Ericsson in 2006) Nortel Networks (bankrupt) Osphere Net Systems Pluris Procket Networks (acquired by Cisco Systems in 2004) See also Cisco Systems acquisitions Edge router Network topology References Internet architecture Hardware routers
Core router
[ "Technology" ]
1,175
[ "Internet architecture", "IT infrastructure" ]
9,358,712
https://en.wikipedia.org/wiki/Potato%20dextrose%20agar
Potato dextrose agar (BAM Media M127) and potato dextrose broth are common microbiological growth media made from potato infusion and dextrose. Potato dextrose agar (abbreviated "PDA") is the most widely used medium for growing fungi and bacteria. PDA has the capability to culture various bacteria and fungi found in the soil. This agar can be used with antibiotics or acid to inhibit bacterial/fungal growth. PDA is used in the food industry to test for fungi that can spoil food products. It is also used in the pharmaceutical industry to screen for potential antifungal agents in medications. Potato dextrose agar is a versatile growing medium for bacteria and fungi (yeasts and molds). This agar is used for a broad range of fungi but there are other agars that are more selective for specific types of fungi. These agars include but are not limited to malt extract agar and sabouraud agar. Malt extract agar is more acidic than PDA and is commonly used to cultivate Penicillium species. Sabouraud agar is also slightly acidic, with a pH of 5.6–6.0, which is similar to PDA. It is most often used for the isolation of pathogenic fungi such as dermatophytes. Typical composition: 1000 mL water; potatoes (sliced, washed, unpeeled; strained broth from 200 g of infused potato into the water above); 20 g dextrose; 20 g agar powder; final pH 5.6±0.2; temperature 25°C. Potato infusion can be made by boiling sliced (washed but unpeeled) potatoes in ~ distilled water for 30 minutes and then decanting or straining the broth through cheesecloth. Distilled water is added such that the total volume of the suspension is . Dextrose and agar powder are then added and the medium is sterilized by autoclaving at for 15 minutes. A similar growth medium, potato dextrose broth (abbreviated "PDB"), is formulated identically to PDA, omitting the agar. Common organisms that can be cultured on PDB are yeasts such as Candida albicans and Saccharomyces cerevisiae and molds such as Aspergillus niger. References Further reading Atlas, R.M.: Handbook of Microbiological Media, second edition. Lawrence C. Parks (1997) Microbiological media Potatoes
Potato dextrose agar
[ "Biology" ]
581
[ "Microbiological media", "Microbiology equipment" ]
9,359,843
https://en.wikipedia.org/wiki/Institut%20national%20de%20recherche%20en%20sciences%20et%20technologies%20pour%20l%27environnement%20et%20l%27agriculture
The Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (; "National Institute of Scientific and Technological Research for Environment and Agriculture"; IRSTEA), formerly known as Cemagref, was a public research institute in France focusing on land management issues, such as water resources and agricultural technology. From 1 January 2020, IRSTEA merged with the INRA (Institut national de la recherche agronomique) to create the INRAE (Institut national de recherche pour l'agriculture, l'alimentation et l'environnement). Organization IRSTEA/Cemagref had an annual operating budget of €81.6 million in 2011, and employed nearly 1350 staff, including 950 permanent staff, others being graduate students, 200 doctoral candidates, interns and foreign researchers. About 250 master's-degree trainees also contributed to some of its activities. There were 9 research sites containing a total of 29 research units. In addition to published research, the institute collaborated with other research organizations and took in a portion of its income from contract work. IRSTEA was a member of UniverSud Paris. Partner organizations Irstea collaborated in a number of other research networks, including the European Network of Freshwater Research Organisations (EurAqua), the Partnership for European Environmental Research (PEER), the European network for testing of agricultural machines (ENTAM), and the European Network of Engineering for Agriculture and Environment (ENGAGE). Notes and references External links Official website Environmental research institutes Agricultural research institutes in France
Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture
[ "Environmental_science" ]
323
[ "Environmental research institutes", "Environmental research" ]
9,360,334
https://en.wikipedia.org/wiki/Modified-release%20dosage
Modified-release dosage is a mechanism that (in contrast to immediate-release dosage) delivers a drug with a delay after its administration (delayed-release dosage) or for a prolonged period of time (extended-release [ER, XR, XL] dosage) or to a specific target in the body (targeted-release dosage). Sustained-release dosage forms are dosage forms designed to release (liberate) a drug at a predetermined rate in order to maintain a constant drug concentration for a specific period of time with minimum side effects. This can be achieved through a variety of formulations, including liposomes and drug-polymer conjugates (an example being hydrogels). Sustained release's definition is more akin to a "controlled release" rather than "sustained". Extended-release dosage consists of either sustained-release (SR) or controlled-release (CR) dosage. SR maintains drug release over a sustained period but not at a constant rate. CR maintains drug release over a sustained period at a nearly constant rate. Sometimes these and other terms are treated as synonyms, but the United States Food and Drug Administration has in fact defined most of these as different concepts. Sometimes the term "depot tablet" is used, by analogy to the term for an injection formulation of a drug which releases slowly over time, but this term is not medically or pharmaceutically standard for oral medication. Modified-release dosage and its variants are mechanisms used in tablets (pills) and capsules to dissolve a drug over time in order to be released more slowly and steadily into the bloodstream, while having the advantage of being taken at less frequent intervals than immediate-release (IR) formulations of the same drug. For example, orally administered extended-release morphine can enable certain chronic pain patients to take only tablets per day, rather than needing to redose every as is typical with standard-release morphine tablets. Most commonly it refers to time-dependent release in oral dose formulations. Timed release has several distinct variants such as sustained release where prolonged release is intended, pulse release, delayed release (e.g. to target different regions of the GI tract) etc. A distinction of controlled release is that it not only prolongs action, but it attempts to maintain drug levels within the therapeutic window to avoid potentially hazardous peaks in drug concentration following ingestion or injection and to maximize therapeutic efficiency. In addition to pills, the mechanism can also apply to capsules and injectable drug carriers (that often have an additional release function), forms of controlled release medicines include gels, implants and devices (e.g. the vaginal ring and contraceptive implant) and transdermal patches. Examples for cosmetic, personal care, and food science applications often centre on odour or flavour release. The release technology scientific and industrial community is represented by the Controlled Release Society (CRS). The CRS is the worldwide society for delivery science and technologies. CRS serves more than 1,600 members from more than 50 countries. Two-thirds of CRS membership is represented by industry and one-third represents academia and government. CRS is affiliated with the Journal of Controlled Release and Drug Delivery and Translational Research scientific journals. List of abbreviations There is no industry standard for these abbreviations, and confusion and misreading have sometimes caused prescribing errors. Clear handwriting is necessary. 
For some drugs with multiple formulations, putting the meaning in parentheses is advisable. A few other abbreviations are similar to these (in that they may serve as suffixes) but refer to dose rather than release rate. They include ES and XS (Extra Strength). Methods Today, most time-release drugs are formulated so that the active ingredient is embedded in a matrix of insoluble substance(s) (various: some acrylics, even chitin; these substances are often patented) such that the dissolving drug must find its way out through the holes. In some SR formulations, the drug dissolves into the matrix, and the matrix physically swells to form a gel, allowing the drug to exit through the gel's outer surface. Micro-encapsulation is also regarded as a more complete technology to produce complex dissolution profiles. By coating an active pharmaceutical ingredient around an inert core and layering it with insoluble substances to form a microsphere, one can obtain more consistent and replicable dissolution rates in a convenient format that can be mixed and matched with other instant release pharmaceutical ingredients into any two piece gelatin capsule. There are certain considerations for the formation of a sustained-release formulation: If the pharmacological activity of the active compound is not related to its blood levels, time releasing has no purpose except in some cases, such as bupropion, to reduce possible side effects. If the absorption of the active compound involves active transport, the development of a time-release product may be problematic. The biological half-life of the drug refers to the drug's elimination from the bloodstream, which can occur through metabolism, urinary excretion, and other forms of elimination. If the active compound has a long half-life (over 6 hours), it is sustained on its own. If the active compound has a short half-life, it would require a large amount to maintain a prolonged effective dose. In this case, a broad therapeutic window is necessary to avoid toxicity; otherwise, the risk is unwarranted and another mode of administration would be recommended. Appropriate half-lives for sustained-release methods are typically 3–4 hours, and a drug dose greater than 0.5 grams is too high. The therapeutic index also factors into whether a drug can be used as a time-release drug. A drug with a narrow therapeutic range, or small therapeutic index, will be judged unfit for a sustained-release mechanism, partly out of concern about dose dumping, which can prove fatal under the conditions mentioned. For a drug that is made to be released over time, the objective is to stay within the therapeutic range as long as needed. There are many different methods used to obtain a sustained release. Diffusion systems Diffusion systems' release rate depends on the rate at which the drug dissolves through a barrier which is usually a type of polymer. Diffusion systems can be broken into two subcategories, reservoir devices and matrix devices. Reservoir devices coat the drug with polymers; for reservoir devices to have sustained-release effects, the polymer must not dissolve, and the drug must instead be released through diffusion. The release rate of reservoir devices can be altered by changing the polymer, and zero-order release is possible; however, drugs with higher molecular weight have difficulty diffusing through the membrane. Matrix devices form a matrix (drug(s) mixed with a gelling agent) where the drug is dissolved/dispersed.
The drug is usually dispersed within a polymer and then released by undergoing diffusion. However, to make the drug SR in this device, the rate of dissolution of the drug within the matrix needs to be higher than the rate at which it is released. The matrix device cannot achieve a zero-order release but higher molecular weight molecules can be used. The diffusion matrix device also tends to be easier to produce and protect from changing in the gastrointestinal tract, but factors such as food can affect the release rate. Dissolution systems Dissolution systems must have the system dissolved slowly in order for the drug to have sustained release properties which can be achieved by using appropriate salts and/or derivatives as well as coating the drug with a dissolving material. It is used for drug compounds with high solubility in water. When the drug is covered with some slow dissolving coat, it will eventually release the drug. Instead of diffusion, the drug release depends on the solubility and thickness of the coating. Because of this mechanism, the dissolution will be the rate limiting factor for drug release. Dissolution systems can be broken down to subcategories called reservoir devices and matrix devices. The reservoir device coats the drug with an appropriate material which will dissolve slowly. It can also be used to administer beads as a group with varying thickness, making the drug release in multiple times creating a SR. The matrix device has the drug in a matrix and the matrix is dissolved instead of a coating. It can come either as drug-impregnated spheres or drug-impregnated tablets. Osmotic systems Osmotic controlled-release oral delivery systems (OROS) have the form of a rigid tablet with a semi-permeable outer membrane and one or more small laser drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the opening(s) in the tablet. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery. Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet. Ion-exchange resin In the ion-exchange method, the resins are cross-linked water-insoluble polymers that contain ionisable functional groups that form a repeating pattern of polymers, creating a polymer chain. The drug is attached to the resin and is released when an appropriate interaction of ions and ion exchange groups occur. The area and length of the drug release and number of cross-link polymers dictate the rate at which the drug is released, determining the SR effect. Floating systems A floating system is a system where it floats on gastric fluids due to low density. The density of the gastric fluids is about 1 g/mL; thus, the drug/tablet administered must have a smaller density. 
The buoyancy will allow the system to float to the top of the stomach and release at a slower rate without the worry of it being excreted. This system requires that there are enough gastric fluids present as well as food. Many dosage forms use this method, such as powders, capsules, and tablets. Bio-adhesive systems Bio-adhesive systems are generally meant to stick to mucus and can be favorable for mouth-based delivery due to the high mucus levels in that area, but are not as simple to use for other areas. Magnetic materials can be added to the drug so another magnet can hold it from outside the body to assist in holding the system in place. However, there is low patient compliance with this system. Matrix systems The matrix system is a mixture of materials with the drug, which slows the release of the drug. However, this system has several subcategories: hydrophobic matrices, lipid matrices, hydrophilic matrices, biodegradable matrices, and mineral matrices. A hydrophobic matrix is a drug mixed with a hydrophobic polymer. This causes SR because the drug, after being dissolved, will have to be released by going through channels made by the hydrophilic polymer. A hydrophilic matrix refers back to the matrix discussed before, where a matrix is a mixture of a drug or drugs with a gelling agent. This system is well liked because of its cost and broad regulatory acceptance. The polymers used can be broken down into categories: cellulose derivatives, non-cellulose natural polymers, and polymers of acrylic acid. A lipid matrix uses wax or similar materials. Drug release happens via diffusion through, and erosion of, the wax and tends to be sensitive to digestive fluids. Biodegradable matrices are made with unstable, linked monomers that will be eroded by biological compounds such as enzymes and proteins. A mineral matrix generally means the polymers used are obtained from seaweed. Stimuli inducing release Examples of stimuli that may be used to bring about release include pH, enzymes, light, magnetic fields, temperature, ultrasonics, osmosis, cellular traction forces, and electronic control of MEMS and NEMS. Spherical hydrogels, in micro-size (50-600 μm diameter) with 3-dimensional cross-linked polymer, can be used as drug carriers to control the release of the drug. These hydrogels are called microgels. They may possess a negative charge, as in, for example, DC-beads. By an ion-exchange mechanism, a large amount of oppositely charged amphiphilic drugs can be loaded inside these microgels. Then, the release of these drugs can be controlled by a specific triggering factor like pH, ionic strength or temperature. Pill splitting Some time-release formulations do not work properly if split, such as controlled-release tablet coatings, while other formulations such as micro-encapsulation still work if the microcapsules inside are swallowed whole. Among the health information technology (HIT) that pharmacists use are medication safety tools to help manage this problem. For example, the ISMP "do not crush" list can be entered into the system so that warning stickers can be printed at the point of dispensing, to be stuck on the pill bottle. Pharmaceutical companies that do not supply a range of half-dose and quarter-dose versions of time-release tablets can make it difficult for patients to be slowly tapered off their drugs. History The earliest SR drugs are associated with a patent in 1938 by Israel Lipowski, who coated pellets, which led to the coating of particles.
The science of controlled release developed further with more oral sustained-release products in the late 1940s and early 1950s, the development of controlled release of marine anti-foulants in the 1950s, and controlled release fertilizer in the 1970s where sustained and controlled delivery of nutrients was achieved following a single application to the soil. Delivery is usually effected by dissolution, degradation, or disintegration of an excipient in which the active compound is formulated. Enteric coating and other encapsulation technologies can further modify release profiles. See also Depot injection Tablet (pharmacy) Footnotes External links Controlled Release Society United Kingdom & Ireland Controlled Release Society Controlled Release Technology 5-day short course at MIT with Professor Robert Langer. Dosage forms Routes of administration Drug delivery devices Pharmacokinetics
Modified-release dosage
[ "Chemistry" ]
3,019
[ "Pharmacology", "Drug delivery devices", "Pharmacokinetics", "Routes of administration" ]
9,360,778
https://en.wikipedia.org/wiki/Proxy%20list
A proxy list is a list of open HTTP/HTTPS/SOCKS proxy servers all on one website. Proxies allow users to make indirect network connections to other computer network services. Proxy lists include the IP addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. Proxy lists are often organized by the various proxy protocols the servers use. Many proxy lists index Web proxies, which can be used without changing browser settings. Proxy Anonymity Levels Elite proxies - Such proxies do not change request fields and look like a real browser, and your real IP address is hidden. Server administrators will commonly be fooled into believing that you are not using a proxy. Anonymous proxies - These proxies do not show a real IP address, however, they do change the request fields, therefore it is very easy to detect that a proxy is being used by log analysis. You are still anonymous, but some server administrators may restrict proxy requests. Transparent proxies - (not anonymous, simply HTTP) - These change the request fields and they transfer the real IP. Such proxies are not applicable for security or privacy uses while surfing the web, and should only be used for network speed improvement. SOCKS is a protocol that relays TCP sessions through a firewall host to allow application users transparent access across the firewall. Because the protocol is independent of application protocols, it can be (and has been) used for many different services, such as telnet, FTP, finger, whois, gopher, WWW, etc. Access control can be applied at the beginning of each TCP session; thereafter the server simply relays the data between the client and the application server, incurring minimum processing overhead. Since SOCKS never has to know anything about the application protocol, it should also be easy for it to accommodate applications that use encryption to protect their traffic from nosy snoopers. No information about the client is sent to the server – thus there is no need to test the anonymity level of the SOCKS proxies. References External links Computer network security Computer networking Internet privacy Computer security software
Proxy list
[ "Technology", "Engineering" ]
439
[ "Computer networking", "Cybersecurity engineering", "Computer engineering", "Computer networks engineering", "Computer security software", "Computer science", "Computer network security" ]
9,360,859
https://en.wikipedia.org/wiki/Nitrazine
Nitrazine or phenaphthazine is a pH indicator dye often used in medicine. More sensitive than litmus, nitrazine indicates pH in the range of 4.5 to 7.5. Nitrazine is usually used as the disodium salt. Use This test is done to ascertain the nature of fluid in the vagina during pregnancy, especially when premature rupture of membranes (PROM) is suspected. This test involves putting a drop of fluid obtained from the vagina onto paper strips containing nitrazine dye. The strips change color depending on the pH of the fluid. The strips will turn blue if the pH is greater than 6.0. A blue strip means it is more likely that the membranes have ruptured. This test, however, can produce false positives. If blood gets in the sample or if there is an infection present, the pH of the vaginal fluid may be higher than normal. Semen also has a higher pH, so recent vaginal intercourse can produce a false reading. Nitrazine is also used to perform a fecal pH test for diagnosing intestinal infections or other digestive problems. In civil engineering, it is used to determine the carbonation spread in concrete structures and therefore assess the state of the rebar's passivation film. References Stool tests PH indicators Nitrobenzene derivatives Naphthalenesulfonic acids Anilines Obstetric drugs Obstetrics
Nitrazine
[ "Chemistry", "Materials_science" ]
288
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
9,361,016
https://en.wikipedia.org/wiki/Chromodomain
A chromodomain (chromatin organization modifier) is a protein structural domain of about 40–50 amino acid residues commonly found in proteins associated with the remodeling and manipulation of chromatin. The domain is highly conserved among both plants and animals, and is represented in a large number of different proteins in many genomes, such as that of the mouse. Some chromodomain-containing genes have multiple alternative splicing isoforms that omit the chromodomain entirely. In mammals, chromodomain-containing proteins are responsible for aspects of gene regulation related to chromatin remodeling and formation of heterochromatin regions. Chromodomain-containing proteins also bind methylated histones and appear in the RNA-induced transcriptional silencing complex. In histone modifications, chromodomains are very conserved. They function by identifying and binding to methylated lysine residues that exist on the surface of chromatin proteins and thereby regulate gene transcription. See also Bromodomain Chromo shadow domain References External links Chromatin Remodeling: Chromodomains at cellsignal.com Protein domains
Chromodomain
[ "Chemistry", "Biology" ]
251
[ "Biochemistry stubs", "Protein stubs", "Protein domains", "Protein classification" ]
9,361,398
https://en.wikipedia.org/wiki/Lipofectamine
Lipofectamine or Lipofectamine 2000 is a common transfection reagent, produced and sold by Invitrogen, used in molecular and cellular biology. It is used to increase the transfection efficiency of RNA (including mRNA and siRNA) or plasmid DNA into in vitro cell cultures by lipofection. Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids. Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE, which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression. In order for a cell to express a transgene, the nucleic acid must reach the nucleus of the cell to begin transcription. However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope. This method of transfection was invented by Dr. Yongliang Chu. See also Lipofection Transfection Vectors in gene therapy Cationic liposome References US Active US7479573B2, Yongliang Chu; Malek Masoud & Gulliat Gebeyehu, "Transfection reagents", assigned to Life Technologies Corp and Invitrogen Group Molecular biology Gene delivery
Lipofectamine
[ "Chemistry", "Biology" ]
507
[ "Genetics techniques", "Molecular biology techniques", "Molecular biology", "Biochemistry", "Gene delivery" ]
11,833,672
https://en.wikipedia.org/wiki/Polystannane
Polystannanes are organotin compounds with the formula (R2Sn)n. These polymers have been of intermittent academic interest; they are unusual because heavy elements comprise the backbone. Structurally related but better characterized (and more useful) are the polysilanes (R2Si)n. History and synthesis Oligo- or polystannanes were first described by Löwig in 1852, only two years after Edward Frankland's report on the isolation of the first organotin compounds. Löwig's route involved treating Sn/K and Sn/Na alloys with iodoethane in the presence of quartz sand, which was used to control the reaction rate. Products with elemental compositions close to those of oligo(diethylstannane)s or poly(diethylstannane) were obtained. Cahours obtained similar products and attributed the formation of the so-called "stannic ethyl" to a reaction of the Wurtz type. Already in 1858, "stannic ethyl" was formulated as a polymeric compound denoted with the composition n(SnC4H5). In 1917 Grüttner, who reinvestigated results on hexaethyldistannane (H5C2)3Sn-Sn(C2H5)3 (reported by Ladenburg in 1870), confirmed the presence of Sn-Sn bonds and predicted for the first time that tin could form chain-like compounds. In 1943, it was postulated that “diphenyltin” exists as a type of polymeric material because of its yellow color, and indeed a bathochromic shift of the wavelength at maximum absorption with increasing number of Sn atoms was found later in the case of oligo(dibutylstannane)s comprising up to 15 Sn atoms. The Wurtz reaction is still used for the preparation of poly(dialkylstannane)s. Treatment of dialkyltin dichlorides with sodium leads to polystannanes of high molar mass, however, in low yields and with formation of (cyclic) oligomers. Other efforts to prepare high molar mass polystannanes by electrochemical reactions or by catalytic dehydropolymerization of dialkylstannanes (R2SnH2) were also made. Unfortunately, frequently, the polymers prepared by those methods were not isolated and typically contained significant fractions of cyclic oligomers. Alternatively, alkyltin halides react with excess electride in ammonia solutions to give metal alkylstannides. Added alkyltin halides then couple to the stannides to give polystannanes. Linear polystannanes Dialkyltin dihydrides (R2SnH2) were reported in 2005 to undergo dehydropolymerization in the presence of Wilkinson’s catalyst. This method afforded polystannanes without detectable amounts of cyclic byproducts. The polymers were yellow with number average molar masses of 10 to 70 kg/mol and a polydispersity of 2–3. By variation of the catalyst concentration the molar masses of the synthesized polymers could be adjusted. A strong influence of the temperature on the degree of conversion was observed. Determination of the molar mass at different degrees of conversion indicated that polymerization did not proceed according to a statistical condensation mechanism, but, likely, by growth onto the catalyst, e.g. by insertion of SnR2-like units. The poly(dialkylstannane)s were found to be thermotropic and displayed first-order phase transitions from one liquid-crystalline phase into another or directly to the isotropic state, depending on the length of the side groups. More specifically, poly(dibutylstannane) for example showed an endothermic phase transition at ~0 °C from a rectangular to a pure nematic phase, as determined by X-ray diffraction. Like polysilanes, polystannanes are semi-conductive.
Temperature-dependent, time-resolved pulse radiolysis microwave conductivity measurements of poly(dibutylstannane) yielded values of charge-carrier mobilities of 0.1 to 0.03 cm2 V−1 s−1, which are similar to those found for pi-bond-conjugated carbon-based polymers. By partial oxidation of the material with SbF5 conductivities of 0.3 S cm−1 could be monitored. The liquid-crystalline characteristics of the poly(dialkylstannane)s permitted facile orientation of these macromolecules, for instance, by mechanical shearing or tensile drawing of blends with poly(ethylene). Poly(dialkylstannane)s with short side groups invariably arranged parallel to the external orientation direction, while the polymers with longer side groups had a tendency to order themselves perpendicular to that axis. References External links Fabien Choffat (2007) Polystannane, Doctoral dissertation, Swiss Federal Institute of Technology, Zürich. Polymers Inorganic polymers Conductive polymers Plastics Organotin compounds Tin(II) compounds
Polystannane
[ "Physics", "Chemistry", "Materials_science" ]
1,080
[ "Inorganic compounds", "Inorganic polymers", "Unsolved problems in physics", "Molecular electronics", "Polymer chemistry", "Polymers", "Amorphous solids", "Conductive polymers", "Plastics" ]
11,834,300
https://en.wikipedia.org/wiki/Latent%20TGF-beta%20binding%20protein
The latent TGF-beta binding proteins (LTBP) are a family of carrier proteins. LTBP is a family of secreted multidomain proteins that were originally identified by their association with the latent form of transforming growth factors. They interact with a variety of extracellular matrix proteins and may play a role in the regulation of TGF beta bioavailability. Genes References External links PDBe-KB provides an overview of all the structure information available in the PDB for Human Latent-transforming growth factor beta-binding protein 1
Latent TGF-beta binding protein
[ "Chemistry" ]
114
[ "Biochemistry stubs", "Protein stubs" ]
11,834,731
https://en.wikipedia.org/wiki/Cerberus%20%28protein%29
Cerberus is a protein that in humans is encoded by the CER1 gene. Cerberus is a signaling molecule which contributes to the formation of the head, heart and left-right asymmetry of internal organs. This gene varies slightly from species to species but its overall functions seem to be similar. Cerberus is secreted by the anterior visceral endoderm and blocks the action of BMP, Nodal and Wnt, secreted by the primitive node, which allows for the formation of a head region. This is accomplished by inhibiting the formation of mesoderm in this region. In Xenopus, Cerberus causes a protein to be secreted that is able to induce the formation of an ectopic head. Knockdown experiments have helped to explain Cerberus's role in both the formation of the head and left and right symmetry. These experiments have shown that Cerberus helps to keep Nodal from crossing to the right side of the developing embryo, allowing left and right asymmetry to form. This is why misexpression of Cerberus can cause the heart to fold in the opposite direction during development. When Cerberus is “knocked down” and BMP and Wnt are up-regulated, the head does not form. Other experiments using mice in which this gene has been “knocked out” showed no head defects, which suggests that it is the combination of the up-regulation of BMP and Wnt along with the absence of Cerberus that causes this defect. For the heart, Cerberus is one of several factors that inhibits Nodal to initiate cardiomyogenic differentiation. The Cerberus gene family produces many different signal proteins that are antagonistically involved in establishing anterior-posterior patterning and left-right patterning in vertebrate embryos. Function Cerberus is an inhibitor in the TGF beta signaling pathway secreted during the gastrulation phase of embryogenesis. Cerberus (Cer) is a gene that encodes a cytokine (a secreted signaling protein) important for induction and formation of the heart and head in vertebrates. The Cerberus gene encodes a polypeptide that is 270 amino acids in length and is expressed in the anterior domain of a gastrula in the endoderm layer. Cerberus also plays a large role as an inhibitory molecule, which is important for proper head induction. Cerberus inhibits the proteins bone morphogenetic protein 4 (BMP4), Xnr1, and Xwnt8. This gene encodes a cytokine member of the cystine knot superfamily, characterized by nine conserved cysteines and a cysteine knot region. The cerberus-related cytokines, together with Dan and DRM / Gremlin, represent a group of bone morphogenetic protein (BMP) antagonists that can bind directly to BMPs and inhibit their activity. In human embryonic development, Cerberus and the protein coded by GREM3 inhibit NODAL in the Wnt signaling pathway during the formation of the germ layers. Specifically, Cerberus and GREM3 act as antagonists to Nodal in the anterior region of the developing embryo, blocking its expression and halting the progression of the primitive node. Orthologs of the gene that codes Cerberus (CER1) are conserved in other non-rodent mammals, indicating that Cerberus has similar functions in other vertebrates. A gene knockdown experiment was conducted in Xenopus, where the amount of Cerberus expressed was decreased by inhibiting translation. The concentrations of the proteins that Cerberus inhibits (BMP4, Xnr1, Xwnt8) were also increased. It was also shown that the decrease of Cerberus translation alone was not enough to inhibit the formation of head structures.
The increase of BMP4, Xnr1, and Xwnt8 alone led to defects in the formation of the head, while the increase of BMP4, Xnr1, and Xwnt8 combined with the decrease of Cerberus blocked the formation of the head. This gene knockdown experiment showed the necessity of Cerberus’ inhibitory functions in the formation of head structures. It may be that although Cerberus is necessary for the induction of a head, its inhibitory actions play a more significant role in ensuring the head is developed properly. Overexpression or overabundance of Cerberus is associated with the development of ectopic heads. These additional head-like structures may contain varying characteristics of a normal head (eye or eyes, brain, notochord) depending on the ratio of overabundant Cerberus to other proteins associated with anterior development that Cerberus inhibits (Wnt, Nodal, and BMP). If only Nodal is blocked, a single head will still form but with abnormalities such as cyclopia. If both Nodal and BMP or Wnt and BMP are sufficiently inhibited, ectopic, abnormal head-like structures will form. Inhibition of all three proteins by Cerberus is required for the development of complete, ectopic heads. Location It is expressed in the anterior endoderm but can vary dorsally and ventrally between species. For example, in amphibians Cerberus is expressed in the anterior dorsal endoderm and in mice it is expressed in the anterior visceral endoderm. Anterior-posterior patterning Anterior-posterior patterning by Cerberus is accomplished by acting as an antagonist to nodal, bmp, and wnt signaling molecules in the anterior region of the vertebrate embryo during gastrulation. Knockdown experiments in which Cerberus was partially repressed show a decreased formation of the head structures. In experiments where Cerberus was decreased and wnt, bmp and nodal signals were increased, embryos completely lacked head structures and developed only trunk structures. These experiments suggest that a balance of these signaling molecules is required for proper development of the anterior and posterior regions. Left-right asymmetry Cerberus is also involved in establishing left-right asymmetry that is critical to the normal physiology of a vertebrate. By blocking nodal on the right side of the embryo, concentrations of nodal remain high only in the left side of the embryo and the nodal cascade cannot be activated on the right side. Because left-right asymmetry is so vital, Cerberus works along with the nodal cilia that push left-determining signal molecules to the left side of the embryo to ensure that the left-right axis is correctly established. Misexpression experiments show that lack of Cerberus expression on the right side can result in situs inversus and cardiovascular malformations. Heart development Cerberus plays a vital role in heart development and differentiation of cardiac mesoderm through activation of the Nodal signaling molecule. Nodal and Wnt activity is antagonized in the endoderm, which results in diffusible signals from Cerberus. More specifically, Nodal inhibits certain cells from joining cardiogenesis while simultaneously activating other cells. The cells that respond to Nodal produce Cerberus in the underlying endoderm, which causes heart development in adjacent cells. Knockdown experiments of Cerberus reduced endogenous cardiomyogenesis and ectopic heart induction. Blocking of Nodal leads to induction of cardiogenic genes through chromatin remodeling.
The heart is developed asymmetrically using the left-right patterning induced by Cerberus which creates a higher concentration of signaling molecules on the left side. Experiments that inhibited Cerberus led to a loss of left-right polarity of the heart, which was shown by bilateral expression of left side-specific genes. During mammalian heart induction, a mammalian homologue, Cer1, is associated with the coordinated suppression of the TGFbeta superfamily members Nodal and BMP. This induces Brahma-associated factor 60c (Baf60c), one of three Baf60 variants (a, b, and c) that are mutually exclusively assembled into the SWI/SNF chromatin remodelling complex. Blocking Nodal and BMP also induces lineage-specific transcription factors Gata4 and Tbx5, which interact with Baf60c. Collectively, these proteins redirect SWI/SNF to activate the cardiac program of gene expression. Targeted inactivation of another homologue, Cerberus like-2 (Cerl2), in the mouse leads to left ventricular cardiac hyperplasia and systolic dysfunction. Evolutionary role and conservation The Nodal signaling pathway, including Cerberus, is evolutionary conserved. It is theorized that the gut was the first asymmetrical organ to develop, but in modern vertebrates, most internal organs display asymmetry. While the Nodal pathway is found in deuterostomes and protostomes, a proposed common ancestor called Urbilateria has been theorized to be the progenitor of all bilaterally symmetrical animals. The only protostomes to possess Nodal are mollusks (including snails), while the vast majority of deuterostomes possess this signaling pathway. Cerberus is present in the signaling pathway of amphioxus, an early chordate. As a result, it is likely that the majority of vertebrates possess Cerberus or analogous molecules (such as Coco in frogs, Dand5 in mice, and charon in zebrafish). Notably, chickens lack the ciliary dependent mechanisms of Nodal distribution, but Nodal and Cerberus are still an integral part of their asymmetrical L-R development. Pigs also lack this ciliary mechanism, but both species rely on an ion pump to accomplish L-R distribution of Nodal. Cerberus's (and analogous molecules') role in this pathway is to bind to Nodal in an inhibitory manner. References Further reading External links Cytokines
Cerberus (protein)
[ "Chemistry" ]
2,091
[ "Cytokines", "Signal transduction" ]
11,835,159
https://en.wikipedia.org/wiki/Polypyrimidine%20tract-binding%20protein
Polypyrimidine tract-binding protein, also known as PTB or hnRNP I, is an RNA-binding protein. PTB functions mainly as a splicing regulator, although it is also involved in alternative 3' end processing, mRNA stability and RNA localization. Two 2020 studies have shown that depleting PTB mRNA in astrocytes can convert these astrocytes to functional neurons. These studies also show that such a treatment can be applied to the substantia nigra of mice models of Parkinson's disease in order to convert astrocytes to dopaminergic neurons and as a consequence restore motor function in these mice. See also Polypyrimidine tract References External links
Polypyrimidine tract-binding protein
[ "Chemistry" ]
145
[ "Biochemistry stubs", "Protein stubs" ]
11,836,131
https://en.wikipedia.org/wiki/Religious%20tourism
Religious tourism, spiritual tourism, sacred tourism, or faith tourism, is a type of tourism with two main subtypes: pilgrimage, meaning travel for religious or spiritual purposes, and the viewing of religious monuments and artefacts, a branch of sightseeing. Types Religious tourism has been characterised in different ways by researchers. Gisbert Rinschede distinguishes these by duration, by group size, and by social structure. Juli Gevorgian proposes two categories that differ in their motivation, namely "pilgrimage tourism" for spiritual reasons or to participate in religious rites, and "church tourism" to view monuments such as cathedrals. The Christian priest Frank Fahey writes that a pilgrim is "always in danger of becoming a tourist", and vice versa since travel always in his view upsets the fixed order of life at home, and identifies eight differences between the two: Pilgrimage Pilgrimage is spiritually- or religiously motivated travel, sometimes over long distances; it has been practised since antiquity and in several of the world's religions. The world's largest mass religious assemblage takes place in India at the Kumbh Mela, which attracts over 120 million pilgrims. Other major pilgrimages include the annual Hajj to Mecca, required once in a Muslim's life. These journeys often involve elaborate rituals and rites, reflecting the deep significance and varied traditions associated with pilgrimage in different cultures and faiths. Religious sightseeing Religious sightseeing can be motivated by various interests, including religion, art, architecture, history, and personal ancestry. People can find holy places interesting and moving, whether they personally are religious or not. Some, such as the churches of Italy, offer fine architecture and major artworks. Portugal, for example, has as its main religious tourism attraction the Sanctuary of Our Lady of Fátima, internationally known by the phenomenon of Marian apparitions. Others are important to world religions: Jerusalem holds a central place in Judaism, Christianity, and Islam. Others again may be both scenic and important to one religion, like the Way of Saint James in Spain, but have been adopted by non-religious people as a personal challenge and indeed as a journey of self-discovery. Religious tourism in India can take many forms, including yoga tourism; the country has sites important to Hinduism, Buddhism, Sikhism and Islam as well as magnificent architecture and, for some travellers, the attraction of orientalism. Japan too offers beautiful religious places from Buddhist temples to Shinto shrines. Secular pilgrimage A category intermediate between pilgrims belonging to a major world religion and pure tourism is the modern concept of secular pilgrimage to places such as the Himalayas felt to be in some way special or even sacred, and where the travel is neither purely pious, nor purely for pleasure, but is to some degree "compromised". For example, New Age believers may travel to such "spiritual hotspots" with the intention of healing themselves and the world. They may practise rituals involving leaving their bodies, possession by spirits (channelling), and recovery of past life memories. The travel is considered by many scholars as transcendental, a life learning process or even a self-realization metaphor. 
See also Christian tourism Devotional articles Halal tourism Kosher tourism Char Dham Yatra References Further reading Ralf van Bühren, Lorenzo Cantoni, and Silvia De Ascaniis (eds.), Special issue on “Tourism, Religious Identity and Cultural Heritage”, in Church, Communication and Culture 3 (2018), pp. 195–418 Razaq Raj and Nigel D. Morpeth, Religious tourism and pilgrimage festivals management: an international perspective, CABI, 2007 Dallen J. Timothy and Daniel H. Olsen, Tourism, religion and spiritual journeys, Routledge, 2006 University of Lincoln (Department of tourism and recreation), Tourism – the spiritual dimension. Conference. Lincoln (Lincolnshire) 2006 N. Ross Crumrine and E. Alan Morinis, Pilgrimage in Latin America, Westport CT 1991 External links Encyclopedia of Religion and Society: Pilgrimage/Tourism (history from ancient times) USA TODAY: 10 Great Places to Mark Christianity's Holiest Day (on Christian sacred places such as St Peter's, Rome, St John's cave on Patmos, and the grotto at Lourdes) CBS Early Show: Rest, relaxation, & religion USA TODAY: On a wing and a prayer (on James Dobson and Focus on the Family in Colorado) Washington Post: Seeking answers with field trips in faith (on Our Lady of Medjugorje, Bosnia) Religious practices Types of tourism
Religious tourism
[ "Biology" ]
929
[ "Behavior", "Religious practices", "Human behavior" ]
11,837,472
https://en.wikipedia.org/wiki/Djuice
Djuice (short for 'digital juice') was a youth-based mobile phone plan from Telenor. History Djuice was launched in Bangladesh by Grameenphone on 14 April 2005, during the Bengali New Year. In October 2006, Djuice was launched by Telenor Group of Pakistan. On 14 April 2007, Djuice Bangladesh rebranded to Djuice. Djuice Bangladesh has a show called . In 2012, the brand Djuice was merged with Telenor, and all the balances of the Djuice customers' accounts were confiscated by Telenor. In October 2013, Ukrainian mobile operator Kyivstar stopped offering mobile plans under the brand Djuice. References Mobile telecommunication services Telenor
Djuice
[ "Technology" ]
151
[ "Mobile telecommunications", "Mobile telecommunication services" ]
11,838,011
https://en.wikipedia.org/wiki/Projection%20panel
A Projection panel (also called overhead display or LCD panel) is a device that, although no longer in production, was used in the way a data projector is used today. It works with an overhead projector. The panel consists of a translucent LCD and a fan to keep it cool. The projection panel sits on the bed of the overhead projector and acts like a transparency sheet. The panels have a VGA input, and sometimes Composite (RCA) and S-Video inputs. Later models have remotes, with functions such as 'freeze', which holds the current image on the screen so the presenter can do other things while it remains displayed. Earlier models only had 640x480 resolution, while newer ones had up to SVGA resolution. Proxima, one maker of the panels, included a magic wand and sensor; the sensor detected where the wand was placed, creating an interactive effect, the equivalent of today's smart boards. Although they are not produced anymore, used panels can be purchased for a fraction of the price of a data projector. The panels are quite dim, as they do not let a great deal of light through, so brightness can be a problem, even with a powerful overhead projector. References Display technology
Projection panel
[ "Engineering" ]
261
[ "Electronic engineering", "Display technology" ]
11,838,043
https://en.wikipedia.org/wiki/Operation%3A%20Bot%20Roast
Operation: Bot Roast is an operation by the FBI to track down bot herders, crackers, or virus coders who install malicious software on computers through the Internet without the owners' knowledge. The malware turns each compromised computer into a zombie computer that sends out spam to other computers, forming a botnet, or network of bot-infected computers. The operation was launched because the vast scale of botnet resources poses a threat to national security. The operation was created to disrupt and disassemble bot herders. In June 2007, the FBI had identified about 1 million computers that were compromised, leading to the arrest of the persons responsible for creating the malware. In the process, owners of infected computers were notified, many of whom were unaware of the exploitation. Some early results of the operation include charges against the following: Robert Matthew Bentley (known as "lsdigital") of Panama City, Florida, pleaded guilty to charges of computer fraud and conspiracy to commit computer fraud for using botnets to install advertising software. Robert Alan Soloway of Seattle, Washington, pleaded guilty to charges of using botnets to send tens of millions of spam messages touting his website. Jeanson James Ancheta pleaded guilty to controlling thousands of infected computers. Jason Michael Downey (known as "Nessun"), founder of the IRC network Rizon, is charged with using botnets to disable other systems. Akbot author Owen Walker (known as "AKILL") of New Zealand, was tried for various crimes and discharged by the prosecution in 2008. References Botnets Computer security exploits Federal Bureau of Investigation operations
Operation: Bot Roast
[ "Technology" ]
331
[ "Computer security exploits" ]
11,838,733
https://en.wikipedia.org/wiki/List%20of%20information%20systems%20journals
The following is a list of information systems journals, containing academic journals that cover information systems. The list given here contains the most influential, currently publishing journals in the field. To understand which are the best journals for a particular Information Systems (IS) field of study, one needs to understand that IS is a multidisciplinary research area and that the "IS discipline draws on the social science as well as the engineering research traditions. The social science tradition is represented by the economics-based and behavioral research, whereas the engineering tradition is epitomized by the design science approach in IS research." Top management information systems journals The following journals were selected by the Association for Information Systems Senior Scholars as a top basket of journals. European Journal of Information Systems Information and Organization Information Systems Journal Information Systems Research Journal of the Association for Information Systems Journal of Information Technology Journal of Management Information Systems Journal of Strategic Information Systems Management Information Systems Quarterly Management journals that publish information systems research Electronic Markets Management Science Organization Science Top information systems journals with an engineering tradition, epitomized by the design science approach Business & Information Systems Engineering References Information systems journals
List of information systems journals
[ "Technology" ]
227
[ "Information systems journals", "Information systems" ]
11,839,239
https://en.wikipedia.org/wiki/Boord%20olefin%20synthesis
The Boord olefin synthesis is an organic reaction forming alkenes from ethers carrying a halogen atom two carbons removed from the oxygen atom (β-halo-ethers), using a metal such as magnesium or zinc. The reaction, discovered by Cecil E. Boord in 1930, is a classic named reaction with high yields and broad scope. The reaction type is an elimination reaction, with magnesium forming an intermediate Grignard reagent. The alkoxy group is a poor leaving group and therefore an E1cB elimination reaction mechanism is proposed. The original publication describes the organic synthesis of the compound isoheptene in several steps. In a 1931 publication, the scope is extended to 1,4-dienes, with magnesium replaced by zinc (see also: Barbier reaction). In the first part of the reaction, the allyl Grignard acts as a nucleophile in nucleophilic aliphatic substitution. References Olefination reactions Organic reactions Name reactions
Boord olefin synthesis
[ "Chemistry" ]
202
[ "Name reactions", "Olefination reactions", "Organic reactions" ]
11,839,689
https://en.wikipedia.org/wiki/RMON
The Remote Network Monitoring (RMON) MIB was developed by the IETF to support monitoring and protocol analysis of local area networks (LANs). The original version (sometimes referred to as RMON1) focused on OSI layer 1 and layer 2 information in Ethernet and Token Ring networks. It has been extended by RMON2 which adds support for Network- and Application-layer monitoring and by SMON which adds support for switched networks. It is an industry-standard specification that provides much of the functionality offered by proprietary network analyzers. RMON agents are built into many high-end switches and routers. Overview Remote Monitoring (RMON) is a standard monitoring specification that enables various network monitors and console systems to exchange network-monitoring data. RMON provides network administrators with more freedom in selecting network-monitoring probes and consoles with features that meet their particular networking needs. An RMON implementation typically operates in a client/server model. Monitoring devices (commonly called "probes" in this context) contain RMON software agents that collect information and analyze packets. These probes act as servers and the Network Management applications that communicate with them act as clients. While both agent configuration and data collection use SNMP, RMON is designed to operate differently than other SNMP-based systems: Probes have more responsibility for data collection and processing, which reduces SNMP traffic and the processing load of the clients. Information is only transmitted to the management application when required, instead of continuous polling and monitoring In short, RMON is designed for "flow-based" monitoring, while SNMP is often used for "device-based" management. RMON is similar to other flow-based monitoring technologies such as NetFlow and SFlow because the data collected deals mainly with traffic patterns rather than the status of individual devices. One disadvantage of this system is that remote devices shoulder more of the management burden, and require more resources to do so. Some devices balance this trade-off by implementing only a subset of the RMON MIB groups (see below). A minimal RMON agent implementation could support only statistics, history, alarm, and event. The RMON1 MIB consists of ten groups: Statistics: real-time LAN statistics e.g. utilization, collisions, CRC errors History: history of selected statistics Alarm: definitions for RMON SNMP traps to be sent when statistics exceed defined thresholds Hosts: host specific LAN statistics e.g. bytes sent/received, frames sent/received Hosts top N: record of N most active connections over a given time period Matrix: the sent-received traffic matrix between systems Filter: defines packet data patterns of interest e.g. 
MAC address or TCP port Capture: collect and forward packets matching the Filter Event: send alerts (SNMP traps) for the Alarm group Token Ring: extensions specific to Token Ring The RMON2 MIB adds ten more groups: Protocol Directory: list of protocols the probe can monitor Protocol Distribution: traffic statistics for each protocol Address Map: maps network-layer (IP) to MAC-layer addresses Network-Layer Host: layer 3 traffic statistics, per host Network-Layer Matrix: layer 3 traffic statistics, per source/destination pairs of hosts Application-Layer Host: traffic statistics by application protocol, per host Application-Layer Matrix: traffic statistics by application protocol, per source/destination pairs of hosts User History: periodic samples of user-specified variables Probe Configuration: remote configuration of probes RMON Conformance: requirements for RMON2 MIB conformance Important RFCs RMON1: RFC 2819 - Remote Network Monitoring Management Information Base RMON2: RFC 4502 - Remote Network Monitoring Management Information Base Version 2 using SMIv2 HCRMON: RFC 3273 - Remote Network Monitoring Management Information Base for High Capacity Networks SMON: RFC 2613 - Remote Network Monitoring MIB Extensions for Switched Networks Overview: RFC 3577 - Introduction to the RMON Family of MIB Modules See also Network performance management Network tap External links RMON: Remote Monitoring MIBs RAMON: open-source implementation of an RMON2 agent Network management
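Because an RMON probe exposes everything it collects through ordinary SNMP tables, the Statistics group can be polled with any SNMP client. The sketch below uses the Python pysnmp library to read two RFC 2819 Statistics-group counters from one probe; the probe address (192.0.2.1), the community string, and the interface row index 1 are placeholder assumptions, and resolving the name 'RMON-MIB' presumes the compiled MIB is available to the library.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Read two counters from the RMON Statistics group (etherStatsTable, row 1).
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),      # SNMPv2c; community string is a placeholder
           UdpTransportTarget(('192.0.2.1', 161)),  # placeholder probe address
           ContextData(),
           ObjectType(ObjectIdentity('RMON-MIB', 'etherStatsPkts', 1)),
           ObjectType(ObjectIdentity('RMON-MIB', 'etherStatsCRCAlignErrors', 1)))
)

if error_indication:
    print(error_indication)  # e.g. a timeout if the probe is unreachable
else:
    for var_bind in var_binds:
        # Prints "<OID or MIB name> = <counter value>"
        print(' = '.join(x.prettyPrint() for x in var_bind))
```

A real management application would typically walk the whole etherStatsTable, and would configure History and Alarm rows by writing to the corresponding control tables, rather than reading a single row, but the access pattern is the same.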
RMON
[ "Engineering" ]
843
[ "Computer networks engineering", "Network management" ]
11,840,047
https://en.wikipedia.org/wiki/Geloso
Geloso, founded in 1931 by Giovanni Geloso, was an Italian manufacturer of radios, televisions, amplifiers, amateur radio receivers, audio equipment and electronic components. Its headquarters were situated in Milan, at Viale Brenta 29. History In 1931 the company started producing not only radio sets but also most of the electronic components with which they were built and, over time, also developed and patented many others. After the Second World War, Geloso expanded its production, becoming, from 1950 onwards, a point of reference for enthusiasts of consumer electronics and hobbyists. The many products under the brand name Geloso were known throughout Italy and much appreciated abroad. The output consisted of innovative products known for their high quality, solid construction and reasonable price. The main production consisted of radios, amplifiers, tape recorders, televisions, kits, and professional laboratory instruments. These were complemented by components such as capacitors, resistors, potentiometers, switches, connectors, transformers and microphones. At the founder's death in 1969, Geloso had become an empire of eight production plants, with a widespread and efficient sales network. Production continued until 1972, when the company closed permanently. There were several reasons for this closure: fierce foreign competition, managerial problems, union demands and massive indebtedness to banks. Emblematic of the climate was Vittorio Valletta's (a person notoriously linked to Mediobanca) description of Olivetti's electronics sector as a "blemish to be erased". Geloso Technical Bulletin Geloso was considered a good businessman, but also someone who wanted to share his passion for electronics. In 1931, he produced a free quarterly publication known as the 'GELOSO Technical Bulletin'. This contained everything needed for the repair and development of the company's equipment, but also, and especially, tips, instructions, characteristics, circuit diagrams and everything that technicians and enthusiasts needed to know. Those were the years when there were no training centres; moreover, schools specialising in electronics were extremely rare. These technical bulletins had the merit of spreading, in a simple and clear manner, knowledge to people who otherwise would not have been able to learn and develop their passion. Assembly kits Geloso's contribution to the knowledge and popularization of radio technology was considerable, thanks mainly to the assembly kits that enabled the purchaser to build a television or radio receiver, or even amateur radio equipment, almost from scratch. The starting point was the provided metal chassis onto which the components were fitted. Some pre-assembled and pre-calibrated parts facilitated the work. By following the instructions in the bulletins, the entire set could be given a final calibration, after which everything was complete and ready to be fitted into a wooden cabinet with knobs, buttons, etc., all marked Geloso. Geloso S.p.A. was a manufacturer of amateur radio equipment between 1931 and 1972. Some of Geloso's most successful products were: radio receivers, tape recorders, audio amplifiers, record players, television sets, radio and TV parts, ham receivers and transmitters.
See also List of Italian Companies References External links Geloso specifications + images Geloso Story Download Bollettini Tecnici Geloso Telecommunications companies of Italy Electronics companies of Italy Electronics companies established in 1931 Italian companies established in 1931 Amateur radio companies Italian brands Manufacturing companies disestablished in 1972 1972 disestablishments in Italy Radio manufacturers
Geloso
[ "Engineering" ]
694
[ "Radio electronics", "Radio manufacturers" ]
11,840,868
https://en.wikipedia.org/wiki/Entropy%20power%20inequality
In information theory, the entropy power inequality (EPI) is a result that relates to the so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary. Statement of the inequality For a random vector X : Ω → Rn with probability density function f : Rn → R, the differential entropy of X, denoted h(X), is defined to be h(X) = -\int_{\mathbb{R}^n} f(x) \log f(x) \, dx, and the entropy power of X, denoted N(X), is defined to be N(X) = \frac{1}{2\pi e} e^{2h(X)/n}. In particular, N(X) = |K|^{1/n} when X is normally distributed with covariance matrix K. Let X and Y be independent random variables with probability density functions in the Lp space Lp(Rn) for some p > 1. Then N(X+Y) \geq N(X) + N(Y). Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices. Alternative form of the inequality The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see Costa and Cover reference below). Let X and Y be independent random variables, as above. Then, let X' and Y' be independently distributed random variables with gaussian distributions, such that h(X') = h(X) and h(Y') = h(Y). Then, h(X+Y) \geq h(X'+Y'). See also Information entropy Information theory Limiting density of discrete points Self-information Kullback–Leibler divergence Entropy estimation References Information theory Probabilistic inequalities Statistical inequalities
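As an added illustration (not taken from the article's references), the one-dimensional Gaussian case shows why the equality condition is stated in terms of proportional covariances: independent Gaussians attain the bound exactly.

```latex
% Illustrative one-dimensional check (an addition, not from the article):
% for X ~ N(0, \sigma_1^2) one has h(X) = \tfrac{1}{2}\ln(2\pi e\,\sigma_1^2),
% hence N(X) = \tfrac{1}{2\pi e}\, e^{2h(X)} = \sigma_1^2.
\[
  N(X) = \sigma_1^2, \qquad N(Y) = \sigma_2^2, \qquad
  N(X+Y) = \sigma_1^2 + \sigma_2^2 = N(X) + N(Y),
\]
% because X + Y ~ N(0, \sigma_1^2 + \sigma_2^2); in one dimension any two
% covariances are trivially proportional, so equality is exactly as predicted.
```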
Entropy power inequality
[ "Mathematics", "Technology", "Engineering" ]
348
[ "Theorems in statistics", "Telecommunications engineering", "Applied mathematics", "Statistical inequalities", "Theorems in probability theory", "Computer science", "Probabilistic inequalities", "Information theory", "Inequalities (mathematics)" ]
11,841,724
https://en.wikipedia.org/wiki/HD%20222582
HD 222582 is a multiple star system in the equatorial constellation of Aquarius. It is invisible to the naked eye with an apparent visual magnitude of 7.7, but can be viewed with binoculars or a small telescope. The system is located at a distance of 137 light years from the Sun based on parallax, and it is drifting further away with a radial velocity of +12 km/s. It is located close enough to the ecliptic that it is subject to lunar occultations. The primary member of this system, designated component A, is an ordinary G-type main-sequence star with a stellar classification of G5V. The physical properties of the star are similar enough to the Sun that it is considered a candidate solar twin. It is about 6.5 billion years old with an inactive chromosphere and is spinning with a low projected rotational velocity of 1.7 km/s. The mass and metallicity of this star are essentially the same as the Sun. It has a 14% larger radius and is radiating 1.3 times the luminosity of the Sun from its photosphere at an effective temperature of 5,790 K. Component B of this system is a close binary system with the components designated HD 222582 Ba and Bb. The pair have a combined class of M4.5 V+ and about 20% the mass of the Sun. Planetary system In November 1999, a dense superjovian planet was announced orbiting the primary by the California and Carnegie Planet Search. Designated component 'b', it was discovered using the radial velocity method, using 24 observations over a period of 1.5 years. The exoplanet is orbiting with a period of and a very large eccentricity of 0.76, ranging in distance from out to away from the primary. See also HD 224693 List of exoplanets discovered before 2000 - HD 222582 b References External links G-type main-sequence stars M-type main-sequence stars Solar twins Planetary systems with one confirmed planet Triple stars Aquarius (constellation) BD-06 6262 222582 116906 J23415154-0559086
HD 222582
[ "Astronomy" ]
517
[ "Constellations", "Aquarius (constellation)" ]
11,843,393
https://en.wikipedia.org/wiki/Clock%20angle%20problem
Clock angle problems are a type of mathematical problem which involve finding the angle between the hands of an analog clock. Math problem Clock angle problems relate two different measurements: angles and time. The angle is typically measured in degrees from the mark of number 12 clockwise. The time is usually based on a 12-hour clock. A method to solve such problems is to consider the rate of change of the angle in degrees per minute. The hour hand of a normal 12-hour analogue clock turns 360° in 12 hours (720 minutes) or 0.5° per minute. The minute hand rotates through 360° in 60 minutes or 6° per minute. Equation for the angle of the hour hand θ_hour = 0.5° × M_Σ = 0.5° × (60 × H + M), where: θ is the angle in degrees of the hand measured clockwise from the 12, H is the hour, M is the minutes past the hour, and M_Σ is the number of minutes since 12 o'clock. Equation for the angle of the minute hand θ_minute = 6° × M, where: θ is the angle in degrees of the hand measured clockwise from the 12 o'clock position, and M is the minute. Example The time is 5:24. The angle in degrees of the hour hand is: θ_hour = 0.5° × (60 × 5 + 24) = 162°. The angle in degrees of the minute hand is: θ_minute = 6° × 24 = 144°. Equation for the angle between the hands The angle between the hands can be found using the following formula: Δθ = |θ_hour − θ_minute| = |0.5° × (60 × H + M) − 6° × M| = |30° × H − 5.5° × M|, where H is the hour and M is the minute. If the angle is greater than 180 degrees then subtract it from 360 degrees. Example 1 The time is 2:20. Δθ = |30° × 2 − 5.5° × 20| = |60° − 110°| = 50°. Example 2 The time is 10:16. Δθ = |30° × 10 − 5.5° × 16| = |300° − 88°| = 212°, which is greater than 180°, so the angle is 360° − 212° = 148°. When are the hour and minute hands of a clock superimposed? The hour and minute hands are superimposed only when their angle is the same, that is, when 0.5° × (60 × H + M) = 6° × M, which gives M = (60/11) × H, where H is an integer in the range 0–11. This gives times of: 0:00, 1:05.45, 2:10.90, 3:16.36, 4:21.81, 5:27.27, 6:32.72, 7:38.18, 8:43.63, 9:49.09, 10:54.54, and 12:00. (0.45 minutes are approximately 27.27 seconds.) See also Clock position References External links https://web.archive.org/web/20100615083701/http://delphiforfun.org/Programs/clock_angle.htm http://www.ldlewis.com/hospital_clock/ - extensive clock angle analysis https://web.archive.org/web/20100608044951/http://www.jimloy.com/puzz/clock1.htm Mathematics education Elementary mathematics Elementary geometry Mathematical problems Clocks
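The formulas above translate directly into a short program. A minimal sketch in Python (the function name and the choice to report the smaller of the two possible angles are illustrative choices, not part of the article):

```python
def clock_angle(hour, minute):
    """Return the smaller angle (in degrees) between the hands at hour:minute."""
    # Hour hand moves 0.5 degrees per minute past 12 o'clock;
    # minute hand moves 6 degrees per minute.
    hour_angle = 0.5 * ((hour % 12) * 60 + minute)
    minute_angle = 6 * minute
    diff = abs(hour_angle - minute_angle)
    # If the difference exceeds 180 degrees, take the angle measured the other way round.
    return min(diff, 360 - diff)

# Examples from the article:
print(clock_angle(5, 24))   # hour hand at 162.0, minute hand at 144.0 -> 18.0 degrees apart
print(clock_angle(2, 20))   # 50.0
print(clock_angle(10, 16))  # 148.0
```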
Clock angle problem
[ "Physics", "Mathematics", "Technology", "Engineering" ]
534
[ "Machines", "Clocks", "Measuring instruments", "Physical systems", "Elementary mathematics", "Elementary geometry", "Mathematical problems" ]
11,844,409
https://en.wikipedia.org/wiki/Tree%20hollow
A tree hollow or tree hole is a semi-enclosed cavity which has naturally formed in the trunk or branch of a tree. They are found mainly in old trees, whether living or not. Hollows form in many species of trees, and are a prominent feature of natural forests and woodlands, and act as a resource or habitat for fungi and a number of vertebrate and invertebrate animals. Hollows may form as the result of physiological stress from natural forces causing the excavating and exposure of the heartwood. Forces may include wind, fire, heat, lightning, rain, attack from insects (such as ants or beetles), bacteria, or fungi. Also, trees may self-prune, dropping lower branches as they reach maturity, exposing the area where the branch was attached. Many animals further develop the hollows using instruments such as their beak, teeth or claws. The size of hollows may depend on the age of the tree. For example, eucalypts develop hollows at all ages, but only from when the trees are 120 years old do they form hollows suitable for vertebrates, and it may take 220 years for hollows suitable for larger species to form. Hollows in fallen timber are also very important for animals such as echidnas, numbats, chuditch and many reptiles. In streams, hollow logs may be important to aquatic animals for shelter and egg attachment. Hollows are an important habitat for many wildlife species, especially where the use of hollows is obligate, as this means no other resource would be a feasible substitute. Animals may use hollows as diurnal or nocturnal shelter sites, as well as for rearing young, feeding, thermoregulation, and to facilitate ranging behaviour and dispersal. While use may also be opportunistic, rather than obligate, it may be difficult to determine the nature of a species' relationship to hollows—it may vary across a species' range, or depend on climatic conditions. Animals will select a hollow based on factors including entrance size and shape, depth, and degree of insulation. Such factors greatly affect the frequency and seasonality of hollow use. Especially in Europe, entomologists are interested in the use of hollows by invertebrates. One beetle associated with hollow trees, Osmoderma eremita, has been given the highest priority according to the European Union's Habitat Directive. Description A tree hollow is a cavity in a living tree. Tree holes can be caused when an injury to the tree, such as breakage of a limb, creates an opening through the bark and exposes the sapwood. The sapwood is attacked by fungi and bacteria, which form a cavity in the bole of the tree. The resulting cavity can fill with water, thus becoming a type of phytotelma. Therefore, there are wet and dry tree holes. Tree holes are important habitats for many animals, such as Ceratopogonidae, Chironomidae, the common merganser, toucans, woodpeckers, and bluebirds. Tree holes can be important in the maintenance and spread of some diseases, for example La Crosse encephalitis. Hollows may be an adaptive trait for trees as animals provide the host tree with fertilizer. Non-excavated hollows Contrary to excavated hollows that are directly caused by animals such as woodpeckers, non-excavated hollows are hollows that are created after a tree has been damaged and decays because of fungal growth. 
This damage can be caused by insects, foraging birds, fire, lightning, snow, frost, and physical abrasion from rocks, falling trees, dendrotelms, and, circumstantially, large herbivores such as red deer (Cervus elaphus) or the European bison (Bison bonasus). In North America, woodpeckers play a keystone role creating holes for other birds or mammals. In Asia, data is deficient but here woodpeckers do not seem to be as much of a keystone group. In Europe, woodpeckers are not seen as keystone species, as most cavities are non-excavated and these are where the majority of hole-nesting songbirds choose to nest. Types of non-excavated hollows Knotholes: Created from a branch snapping off of a main stem Chimneys: Created when a stem snaps and makes an upward-facing entrance Cracks: Created when a trunk splits Trunk holes: Created when a cavity forms in the main stem. If it is narrow and elongate, it is called a slit. Artificial hollows Animals have been found to use artificial structures as substitutes for hollows. For example, pygmy possums in the chute of a grain silo; or pardalotes in the top, horizontal pipe of a children's swing. Purpose-built nest boxes, such as birdhouses and bat tubes, are also constructed for conservation and for wildlife observation. The size of the nest box, entry hole and placement height may be chosen in consideration of certain species. However, nestboxes have different microclimatic conditions and can therefore not be treated as direct substitutes. Natural hollows are generally preferred for habitat conservation. Actual tree hollows can be created artificially by cutting into trees with chainsaws and partly covering the resulting hollows with timber faceplates. These are readily used by arboreal animals including mammals and birds. Compared to nest boxes, they last longer and give better protection from external temperatures. Around the world Conservation of hollow-using fauna is an issue in many parts of the world. In North America, recovery of the eastern bluebird (Sialia sialis) has required nest boxes due to the loss of natural hollows. The scarcity of dead, hollow-bearing trees in Scandinavian forests is a key threatening process to native bird life. In Sweden, almost half of red-listed species are dependent on dead hollow-bearing trees or logs. Australia In Australia, 304 vertebrate species are known to use tree hollows in Australia: 29 amphibians, 78 reptiles, 111 birds, 86 mammals. Approximately 100 of these are now rare, threatened or near-threatened on Australian State or Commonwealth legislation, in part because of the removal of hollow-bearing trees. Threats to hollows include: native forest silviculture, firewood collection, rural dieback (such as from inundation and salinity), grazing by cattle, and land clearing. Additionally, pest and introduced species such as the common myna and western honey bee (Apis mellifera) compete with native species for hollows; domestic and feral cats and black rats prey on hollow-using animals and have been damaging especially to island populations; and some native hollow-using species have increased population densities or expanded their ranges since European settlement, such as the galah, common brushtail possum and the little corella and compete with less common native species. 
Russia, China, Korea Asian black bears, also known as Himalayan bears (Lat.: Ursus thibetanus), in northern parts of their range, such as the Russian province of Primorye, China, and both Koreas, prefer to spend winter periods in large tree hollows, where females also give birth to cubs. Threats include massive deforestation in these countries, combined with direct poaching of wintering bears and the selective destruction of the best hollow trees. In Russia, attempts (sometimes successful) are made to restore such broken trees. Unfortunately, only a small portion of all damaged trees can be restored in Primorye, where forests are basically logged without taking into account the needs of large fauna. Gallery See also Nest box Snag Tree throw References External links Life in a Tree Hole. Nature Bulletin No. 581. November 21, 1959. Forest Preserve District of Cook County Research - Tree holes / Phytotelmata Plant morphology Dead wood
Tree hollow
[ "Biology" ]
1,648
[ "Plant morphology", "Plants" ]
11,845,078
https://en.wikipedia.org/wiki/1972%20Chicago%20commuter%20rail%20crash
A collision between two commuter trains in Chicago occurred during the cloudy morning rush hour on October 30, 1972, and was the worst such crash in Chicago's history. Illinois Central Gulf Train 416, made up of newly purchased Highliners, overshot the 27th Street station on what is now the Metra Electric Line, and the engineer asked and received permission from the train's conductor to back the train to the platform. This move was then made without the flag protection required by the railroad's rules. The train's crew had not used a flagman before, and while it was a prescribed practice, it had fallen out of use. Instead, the conductor and the engineer worked in concert to back up the train, with the curve in the track partially blocking the view. Train 416 passed the automatic block signals, which cleared express Train 720, made up of more heavily constructed single level cars, to continue at full speed on the same track. The engineer of the express train did not see the bilevel train backing up until it was too late. When the trains collided, the front car of the express train telescoped the rear car of the bilevel train, killing 45 people and injuring 332. The death toll could have been higher, but the accident occurred near Michael Reese Hospital (which later moved) and Mercy Hospital. Later investigations showed that Train 720 likely could have seen the red light for Train 416 and avoided a collision if it was traveling slower (30 mph). It is estimated that Train 720 hit Train 416 at about 44-50 mph. References External links Final accident report in full - NTSB - On ROSAP website Collision of Illinois Central Gulf Railroad Commuter Trains: Investigation Summary and Recommendations List of crash dead identified - Clipping from Chicago Tribune - Newspapers.com Works cited Railway accidents in 1972 Railway accidents and incidents in Illinois 1970s in Chicago Accidents and incidents involving Illinois Central Railroad 1972 in Illinois October 1972 events in the United States
1972 Chicago commuter rail crash
[ "Technology" ]
395
[ "Railway accidents and incidents", "Rail accident stubs" ]
8,693,227
https://en.wikipedia.org/wiki/Wolfgang%20M.%20Schmidt
Wolfgang M. Schmidt (born 3 October 1933) is an Austrian mathematician working in the area of number theory. He studied mathematics at the University of Vienna, where he received his PhD, which was supervised by Edmund Hlawka, in 1955. Wolfgang Schmidt is a Professor Emeritus from the University of Colorado at Boulder and a member of the Austrian Academy of Sciences and the Polish Academy of Sciences. Career He was awarded the eighth Frank Nelson Cole Prize in Number Theory for work on Diophantine approximation. He is known for his subspace theorem. In 1960, he proved that every normal number in base r is normal in base s if and only if log r / log s is a rational number. He also proved the existence of T numbers. His series of papers on irregularities of distribution can be seen in J.Beck and W.Chen, Irregularities of Distribution, Cambridge University Press. Schmidt is in a small group of number theorists who have been invited to address the International Congress of Mathematicians three times. The others are Iwaniec, Shimura, and Tate. In 1986, Schmidt received the Humboldt Research Award and in 2003, he received the Austrian Decoration for Science and Art. Schmidt holds honorary doctorates from the University of Ulm, the Sorbonne, the University of Waterloo, the University of Marburg and the University of York. In 2012 he became a fellow of the American Mathematical Society. Books Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections]) Diophantine approximations and Diophantine equations, Lecture Notes in Mathematics, Springer Verlag 2000 Equations Over Finite Fields: An Elementary Approach, 2nd edition, Kendrick Press 2004 References Further reading Diophantine approximation: festschrift for Wolfgang Schmidt, Wolfgang M. Schmidt, H. P. Schlickewei, Robert F. Tichy, Klaus Schmidt, Springer, 2008, 1933 births Living people Number theorists Recipients of the Austrian Decoration for Science and Art Institute for Advanced Study visiting scholars University of Colorado Boulder faculty Members of the Austrian Academy of Sciences Members of the Polish Academy of Sciences Fellows of the American Mathematical Society
Wolfgang M. Schmidt
[ "Mathematics" ]
431
[ "Number theorists", "Number theory" ]
8,693,793
https://en.wikipedia.org/wiki/List%20of%20chatbots
A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades. This list of chatbots is a general overview of notable chatbot applications and web interfaces. General chatbots Historical chatbots See also The Pile (dataset), public data used to train many research models References Natural language parsing Software comparisons
List of chatbots
[ "Technology" ]
135
[ "Software comparisons", "Computing comparisons" ]
8,694,335
https://en.wikipedia.org/wiki/Biocommunication%20%28science%29
In the study of the biological sciences, biocommunication is any specific type of communication within (intraspecific) or between (interspecific) species of plants, animals, fungi, protozoa and microorganisms. Communication means sign-mediated interactions following three levels of rules (syntactic, pragmatic and semantic). Signs in most cases are chemical molecules (semiochemicals), but they can also be tactile or, as in animals, visual and auditory. Biocommunication of animals may include vocalizations (as between competing bird species), or pheromone production (as between various species of insects), chemical signals between plants and animals (as in tannin production used by vascular plants to warn away insects), and chemically mediated communication between plants and within plants. Biocommunication of fungi demonstrates that mycelial communication integrates interspecific sign-mediated interactions between fungal organisms, soil bacteria and plant root cells, without which plant nutrition could not be organized. Biocommunication of Ciliates identifies the various levels and motifs of communication in these unicellular eukaryotes. Biocommunication of Archaea represents key levels of sign-mediated interactions in the evolutionarily oldest akaryotes. Biocommunication of phages demonstrates that the most abundant living agents on this planet coordinate and organize by sign-mediated interactions. Biocommunication is the essential tool for coordinating the behavior of the various cell types of immune systems. Biocommunication, biosemiotics and linguistics Biocommunication theory may be considered to be a branch of biosemiotics. Whereas biosemiotics studies the production and interpretation of signs and codes, biocommunication theory investigates concrete interactions mediated by signs. Accordingly, syntactic, semantic, and pragmatic aspects of biocommunication processes are distinguished. Biocommunication specific to animals (animal communication) is considered a branch of zoosemiotics. The semiotic study of molecular genetics can be considered a study of biocommunication at its most basic level. Interpretation of abiotic indices Interpreting stimuli from the environment is an essential part of life for any organism. Abiotic things that an organism must interpret include climate (weather, temperature, rainfall), geology (rocks, soil type), and geography (location of vegetation communities, exposure to elements, location of food and water sources relative to shelter sites). Birds, for example, migrate using cues such as the approaching weather or seasonal day length cues. Birds also migrate from areas of low or decreasing resources to areas of high or increasing resources, most commonly food or nesting locations. Birds that nest in the Northern Hemisphere tend to migrate north in the spring due to the increase in insect population, budding plants and the abundance of nesting locations. During the winter, birds migrate south not only to escape the cold but also to find a sustainable food source. Some plants will bloom and attempt to reproduce when they sense days getting shorter. If they cannot reproduce before the seasons change and they die, then they do not pass on their genes. Their ability to recognize a change in abiotic factors allows them to ensure reproduction. Trans-organismic communication Trans-organismic communication is when organisms of different species interact. In biology, the relationships formed between different species are known as symbiosis.
These relationships come in two main forms: mutualistic and parasitic. Mutualistic relationships are when both species benefit from their interactions. For example, pilot fish gather around sharks, rays, and sea turtles to eat various parasites from the surface of the larger organism. The fish obtain food from following the sharks, and the sharks receive a cleaning in return. Parasitic relationships are when one organism benefits from the other organism at a cost to it. For example, in order for mistletoe to grow it must leach water and nutrients from a tree or shrub. Communication between species is not limited to securing sustenance. Many flowers rely on bees to spread their pollen and facilitate floral reproduction. To allow this, many flowers evolved bright, attractive petals and sweet nectar to attract bees. In a 2010 study, researchers at the University of Buenos Aires examined a possible relationship between fluorescence and attraction. The study concluded that reflected light was much more important in pollinator attraction than fluorescence. Communicating with other species allows organisms to form relationships that are advantageous for survival, and all of these relationships are based on some form of trans-organismic communication. Inter-organismic communication Inter-organismic communication is communication between organisms of the same species (conspecifics). Inter-organismic communication includes human speech, which is key to maintaining social structures. Dolphins communicate with one another in a number of ways: by creating sounds, by making physical contact with one another, and through the use of body language. Dolphins communicate vocally through clicking sounds and pitches of whistling specific to only one individual. The whistling helps communicate the individual's location to other dolphins. For example, if a mother loses sight of her offspring, or when two familiar individuals cannot find each other, their individual pitches help them navigate back to the group. Body language can be used to indicate numerous things, such as a nearby predator, to indicate to others that food has been found, and to demonstrate an individual's attractiveness in order to find a mating partner. However, mammals such as dolphins and humans are not alone in communicating within their own species. Peacocks can fan their feathers in order to communicate a territorial warning. Bees can tell other bees when they have found nectar by performing a dance when they return to the hive. Deer may flick their tails to warn others in their trail that danger is approaching. Sexual communication Sexual communication is the use of biocommunication signals to facilitate sexual interaction. Sexual communication appears to have three different aspects. (1) First, signals are employed to facilitate sexual interaction between individuals. (2) Second, signals are used to facilitate outbreeding and reduce inbreeding. (3) Third, signals are used to facilitate sexual selection among potential mates. It was proposed that these three aspects of sexual communication respectively promote the repair of DNA damage in the genomes passed on to progeny, the masking of mutations in the genomes of progeny, and selection for genetic fitness in a mating partner. Examples of sexual communication have been described in bacteria, fungi, protozoa, insects, plants and vertebrates.
Intra-organismic communication Intra-organismic communication is not solely the passage of information within an organism, but also concrete interaction between and within cells of an organism, mediated by signs. This could be on a cellular and molecular level. An organism's ability to interpret its own biotic information is extremely important. If the organism is injured, falls ill, or must respond to danger, it needs to be able to process that physiological information and adjust its behavior. For example, when the human body starts to overheat, specialized glands release sweat, which absorbs the heat and then evaporates. This communication is imperative to survival in many species including plant life. Plants lack a central nervous system so they rely on a decentralized system of chemical messengers. This allows them to grow in response to factors such as wind, light and plant architecture. Using these chemical messengers, they can react to the environment and assess the best growth pattern. Essentially, plants grow to optimize their metabolic efficiency. Humans also rely on chemical messengers for survival. Epinephrine, also known as adrenaline, is a hormone that is secreted during times of great stress. It binds to receptors on the surface of cells and activates a pathway that alters the structure of glucose. This causes a rapid increase in blood sugar. Adrenaline also activates the central nervous system increasing heart rate and breathing rate. This prepares the muscles for the body's natural fight-or-flight response. Organisms rely on many different means of intra-organismic communication. Whether it is through neural connections or chemical messengers (including hormones), intra-organismic biocommunication evolved to respond to threats, maintain homeostasis and ensure self preservation. Language hierarchy Given the complexity and range of biological organisms and the further complexity within the neural organization of any particular animal organism, a variety of biocommunication languages exists. A hierarchy of biocommunication languages in animals has been proposed by Subhash Kak: these languages, in order of increasing generality, are associative, re-organizational, and quantum. The three types of formal languages of the Chomsky hierarchy map into the associative language class, although context-free languages as proposed by Chomsky do not exist in real life interactions. See also Notes Biological processes Plant intelligence Animal communication
Biocommunication (science)
[ "Biology" ]
1,791
[ "Plants", "Plant intelligence", "nan" ]
8,694,410
https://en.wikipedia.org/wiki/CCL6
Chemokine (C-C motif) ligand 6 (CCL6) is a small cytokine belonging to the CC chemokine family that has only been identified in rodents. In mice, CCL6 is expressed in cells from neutrophil and macrophage lineages, and can be greatly induced under conditions suitable for myeloid cell differentiation. It is highly expressed in bone marrow cultures that have been stimulated with the cytokine GM-CSF. Some low levels of gene expression also occur in certain cell lines of myeloid origin (e.g. the immature myeloid cell lines DA3 and 32D cl3, and the macrophage cell line P388D) that can also be greatly induced in culture with GM-CSF. However, in activated T cell lines, expression of CCL6 is greatly reduced. CCL6 can also be induced in the mouse lung by the cytokine interleukin 13. Mouse CCL6 is located on chromosome 11. The cell surface receptor for CCL6 is believed to be the chemokine receptor CCR1. References Cytokines
CCL6
[ "Chemistry" ]
239
[ "Cytokines", "Signal transduction" ]
8,694,762
https://en.wikipedia.org/wiki/Information%20Presentation%20Facility
Information Presentation Facility (IPF) is a system for presenting online help and hypertext on IBM OS/2 systems. IPF also refers to the markup language that is used to create IPF content. The IPF language has its origins in BookMaster and Generalized Markup Language developed by IBM. The IPF language is very similar to the well-known HTML language, version 3.0, with a range of additional possibilities. Therefore, a trained user may use virtually any word processor when creating IPF documents. The IPF language consists of 45 basic commands. IPF files are compiled using the IPF Compiler (IPFC) into viewable INF or HLP files. IPF HLP files are distinct from the WinHelp HLP files that are prevalent in Windows. OS/2 contains a built in viewer, and there are other viewers available for other platforms. Example 1 - IBM Here is a sample of IPF markup from IBM's Information Presentation Facility Programming Guide. <nowiki> .* This is a comment line :userdoc. :title.Endangered Mammals :h1 res=001. The Manatee :p. The manatee has a broad flat tail and two flipper like forelegs. There are no back legs. The manatee's large upper lip is split in two and can be used like fingers to place food into the mouth. Bristly hair protrudes from its lips, and almost buried in its hide are small eyes, with which it can barely see. :euserdoc. </nowiki> Example 2 - PM123 User's Manual <nowiki> :lm margin=2.:font facename=Helv size=24x10. :p.:hp8.Welcome to PM123 !:ehp8. :font facename=Helv size=16x8. :p.:p. Hello and welcome to the wonderful world of digital music on OS/2. First we must congratulate you for choosing the best MPEG-audio player available for OS/2! PM123 has been in development since beginning of 1997 and has become the most advanced player on OS/2. Some of you may have used the earlier betas of PM123 and for your convenience, here are the new features in this release: .br :ul compact. :li. New skin options, allowing PM123 to be modified to just about anything. :li. Graphical :hp2.equalizer:ehp2., including pre-amplification and band mute. :li. Support for plugins, a :hp2.spectrum analyzer:ehp2. and :hp2.oscilloscope:ehp2. plugin. :li. :hp2.Playlist Manager:ehp2. for users, allowing easier managing of playlists. :li. Better HTTP streaming support: support for URLs in playlist, and M3Us for playlists. :li. Recursive directory adding. :li. Commandline and remote control of PM123. :li. General improvements in all parts of the player. :eul. .br .br :p. </nowiki> Status of IPF IPF is still used as part of OS/2's latest incarnation, ArcaOS. It is otherwise rarely used, although there are several tools that can read or write IPF files. HTMIPF: Converts HTML to IPF HyperMake: Multi-format documentation generator IPF Editor: Commercial IPF editor UDO: Open source multi-format documentation generator VyperHelp: Open source IPF editor and converter Free Pascal's documentation generator (fpdoc), can also generate OS/2's IPF output. Help Viewers The original OS/2 VIEW.EXE application NewView v2.x. This is an open source project and code is available from Netlabs. Free Pascal's text base IDE has support for various help formats - OS/2's INF format being one of them. The fpGUI Toolkit project also has an INF viewer called DocView. It is an open source project and was originally a port of NewView v2.x, but has since seen some different designs and changes. INF is also the official help file format of fpGUI Toolkit. 
Tools and documentation for the INF/HLP file format by Marcus Gröber External links INF file format PM123man.INF - PM123 User's Manual version 0.99 (Source File Distribution, 298 KB) Markup languages OS/2 Online help
Information Presentation Facility
[ "Technology" ]
974
[ "Computing platforms", "OS/2" ]
8,696,000
https://en.wikipedia.org/wiki/CCL19
Chemokine (C-C motif) ligand 19 (CCL19) is a protein that in humans is encoded by the CCL19 gene. This gene is one of several CC cytokine genes clustered on the p-arm of chromosome 9. Cytokines are a family of secreted proteins involved in immunoregulatory and inflammatory processes. The CC cytokines are proteins characterized by two adjacent cysteines. The cytokine encoded by this gene may play a role in normal lymphocyte recirculation and homing. It also plays an important role in trafficking of T cells in thymus, and in T cell and B cell migration to secondary lymphoid organs. It specifically binds to chemokine receptor CCR7. Chemokine (C-C motif) ligand 19 (CCL19) is a small cytokine belonging to the CC chemokine family that is also known as EBI1 ligand chemokine (ELC) and macrophage inflammatory protein-3-beta (MIP-3-beta). CCL19 is expressed abundantly in thymus and lymph nodes, with moderate levels in trachea and colon and low levels in stomach, small intestine, lung, kidney and spleen. The gene for CCL19 is located on human chromosome 9. This chemokine elicits its effects on its target cells by binding to the chemokine receptor chemokine receptor CCR7. It attracts certain cells of the immune system, including dendritic cells and antigen-engaged B cells, CCR7+ central-memory T-Cells. References Further reading External links Cytokines
CCL19
[ "Chemistry" ]
358
[ "Cytokines", "Signal transduction" ]
8,696,119
https://en.wikipedia.org/wiki/Ultraviolet%20photoelectron%20spectroscopy
Ultraviolet photoelectron spectroscopy (UPS) refers to the measurement of kinetic energy spectra of photoelectrons emitted by molecules that have absorbed ultraviolet photons, in order to determine molecular orbital energies in the valence region. Basic theory If Albert Einstein's photoelectric law is applied to a free molecule, the kinetic energy (E_K) of an emitted photoelectron is given by E_K = hν − I, where h is the Planck constant, ν is the frequency of the ionizing light, and I is an ionization energy for the formation of a singly charged ion in either the ground state or an excited state. According to Koopmans' theorem, each such ionization energy may be identified with the energy of an occupied molecular orbital. The ground-state ion is formed by removal of an electron from the highest occupied molecular orbital, while excited ions are formed by removal of an electron from a lower occupied orbital. History Before 1960, virtually all measurements of photoelectron kinetic energies were for electrons emitted from metals and other solid surfaces. In about 1956, Kai Siegbahn developed X-ray photoelectron spectroscopy (XPS) for surface chemical analysis. This method uses x-ray sources to study energy levels of atomic core electrons, and at the time had an energy resolution of about 1 eV (electronvolt). Ultraviolet photoelectron spectroscopy (UPS) was pioneered in 1961 by Feodor I. Vilesov, a physicist at St. Petersburg (Leningrad) State University in Russia (USSR), to study the photoelectron spectra of free molecules in the gas phase. The early experiments used monochromatized radiation from a hydrogen discharge and a retarding potential analyzer to measure the photoelectron energies. The method was further developed by David W. Turner, a physical chemist at Imperial College in London and then at Oxford University, in a series of publications from 1962 to 1967. As a photon source, he used a helium discharge lamp that emits a wavelength of 58.4 nm (corresponding to an energy of 21.2 eV) in the vacuum ultraviolet region. With this source, Turner's group obtained an energy resolution of 0.02 eV. Turner referred to the method as "molecular photoelectron spectroscopy", now usually "ultraviolet photoelectron spectroscopy" or UPS. As compared to XPS, UPS is limited to energy levels of valence electrons, but measures them more accurately. After 1967, commercial UPS spectrometers became available. One of the latest commercial devices was the Perkin Elmer PS18. For the last twenty years, the systems have been homemade. One of the latest in progress – Phoenix II – is that of the IPREM laboratory in Pau, developed by Dr. Jean-Marc Sotiropoulos. Application UPS measures experimental molecular orbital energies for comparison with theoretical values from quantum chemistry, which was also extensively developed in the 1960s. The photoelectron spectrum of a molecule contains a series of peaks, each corresponding to one valence-region molecular orbital energy level. Also, the high resolution allowed the observation of fine structure due to vibrational levels of the molecular ion, which facilitates the assignment of peaks to bonding, nonbonding or antibonding molecular orbitals. The method was later extended to the study of solid surfaces, where it is usually described as photoemission spectroscopy (PES). It is particularly sensitive to the surface region (to 10 nm depth), due to the short range of the emitted photoelectrons (compared to X-rays). 
It is therefore used to study adsorbed species and their binding to the surface, as well as their orientation on the surface. A useful result from characterization of solids by UPS is the determination of the work function of the material. An example of this determination is given by Park et al. Briefly, the full width of the photoelectron spectrum (from the highest kinetic energy/lowest binding energy point to the low kinetic energy cutoff) is measured and subtracted from the photon energy of the exciting radiation, and the difference is the work function. Often, the sample is electrically biased negative to separate the low energy cutoff from the spectrometer response. Gas discharge lines Outlook UPS has seen a considerable revival with the increasing availability of synchrotron light sources that provide a wide range of monochromatic photon energies. See also Angle resolved photoemission spectroscopy (ARPES) Photoelectron photoion coincidence spectroscopy (PEPICO) Time-resolved two-photon photoelectron spectroscopy References Emission spectroscopy Surface science Electron spectroscopy Soviet inventions
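The work-function determination described above amounts to a single subtraction; as a minimal sketch (the notation is illustrative and not taken from the cited study), if W_spec denotes the full width of the spectrum, from the highest-kinetic-energy emission down to the low-kinetic-energy (secondary-electron) cutoff, then

\phi \;=\; h\nu - W_{\mathrm{spec}} \;=\; h\nu - \left(E_{\mathrm{kin}}^{\mathrm{max}} - E_{\mathrm{kin}}^{\mathrm{cutoff}}\right)

so, for example, a spectrum of full width 16.0 eV recorded with He I radiation (hν = 21.2 eV) would imply a work function of about 5.2 eV.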
Ultraviolet photoelectron spectroscopy
[ "Physics", "Chemistry", "Materials_science" ]
932
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Emission spectroscopy", "Surface science", "Condensed matter physics", "Spectroscopy" ]
8,696,574
https://en.wikipedia.org/wiki/Aqueous%20normal-phase%20chromatography
Aqueous normal-phase chromatography (ANP) is a chromatographic technique that involves the mobile phase compositions and polarities between reversed-phase chromatography (RP) and normal-phase chromatography (NP), while the stationary phases are polar. Principle In normal-phase chromatography, the stationary phase is polar and the mobile phase is nonpolar. In reversed phase the opposite is true; the stationary phase is nonpolar and the mobile phase is polar. Typical stationary phases for normal-phase chromatography are silica or organic moieties with cyano and amino functional groups. For reversed phase, alkyl hydrocarbons are the preferred stationary phase; octadecyl (C18) is the most common stationary phase, but octyl (C8) and butyl (C4) are also used in some applications. The designations for the reversed phase materials refer to the length of the hydrocarbon chain. In normal-phase chromatography, the least polar compounds elute first and the most polar compounds elute last. The mobile phase consists of a nonpolar solvent such as hexane or heptane mixed with a slightly more polar solvent such as isopropanol, ethyl acetate or chloroform. Retention decreases as the amount of polar solvent in the mobile phase increases. In reversed phase chromatography, the most polar compounds elute first with the more nonpolar compounds eluting later. The mobile phase is generally a mixture of water and miscible polarity-modifying organic solvent, such as methanol, acetonitrile or THF. Retention increases as the fraction of the polar solvent (water) in the mobile phase is higher. Normal phase chromatography retains molecules via an adsorptive mechanism, and is used for the analysis of solutes readily soluble in organic solvents. Separation is achieved based on the polarity differences among functional groups such as amines, acids, metal complexes, etc. as well as their steric properties, while in reversed-phase chromatography, a partition mechanism typically occurs for the separation by non-polar differences. In the aqueous normal-phase chromatography the support is based on a silica with "hydride surface" which is distinguishable from the other silica support materials, used either in normal phase, reversed phase, or hydrophilic interaction chromatography. Most silica materials used for chromatography have a surface composed primarily of silanols (-Si-OH). In a "hydride surface" the terminal groups are primarily -Si-H. The hydride surface can also be functionalized with carboxylic acids and long-chain alkyl groups. Mobile phases for ANPC are based on organic solvents as bulk solvents (such as methanol or acetonitrile) with a small amount of water as a modifier of polarity; thus, the mobile phase is both "aqueous" (water is present) and "normal phase type" (less polar than the stationary phase). Thus, polar solutes (such as acids and amines) are more strongly retained, with the ability to affect the retention, which decreases as the amount of water in the mobile phase increases. Typically the mobile phases are rich with organic solvents, with amount of the nonpolar solvent in the mobile phase at least 60% or greater to reach minimal required retention. A true ANP stationary phase will be able to function in both the reversed phase and normal phase modes with only the amount of water in the eluent varying. Thus a continuum of solvents can be used from 100% aqueous to pure organic. ANP retention has been demonstrated for a variety of polar compounds on the hydride based stationary phases. 
Recent investigations have demonstrated that silica hydride materials have a very thin water layer (about 0.5 monolayer) in comparison to HILIC phases that can have from 6–8 monolayers. In addition the substantial negative charge on the surface of hydride phases is the result of hydroxide ion adsorption from the solvent rather than silanols. Features An interesting feature of these phases is that both polar and nonpolar compounds can be retained over some range of mobile phase composition (organic/aqueous). The retention mechanism of polar compounds has recently been shown to be the result of the formation of a hydroxide layer on the surface of the silica hydride. Thus positively charged analytes are attracted to the negatively charged surface and other polar analytes are likely to be retained through displacement of hydroxide or other charged species on the surface. This property distinguishes it from a pure HILIC (hydrophilic interaction chromatography) columns where separation by polar differences is obtained through partitioning into a water-rich layer on the surface, or a pure RP stationary phase on which separation by nonpolar differences in solutes is obtained with very limited secondary mechanisms operating. Another important feature of the hydride-based phases is that for many analyses it is usually not necessary to use a high pH mobile phase to analyze polar compounds such as bases. The aqueous component of the mobile phase usually contains from 0.1 to 0.5% formic or acetic acid, which is compatible with detector techniques that include mass spectral analysis. References C. Kulsing, Y. Nolvachai, P.J. Marriott, R.I. Boysen, M.T. Matyska, J.J. Pesek, M.T.W. Hearn, J. Phys. Chem B, 119 (2015) 3063-3069. J. Soukup, P. Janas, P. Jandera, J. Chromatogr. A, 1286 (2013) 111-118 Chromatography
Aqueous normal-phase chromatography
[ "Chemistry" ]
1,235
[ "Chromatography", "Separation processes" ]
8,696,590
https://en.wikipedia.org/wiki/CCL23
Chemokine (C-C motif) ligand 23 (CCL23) is a small cytokine belonging to the CC chemokine family that is also known as Macrophage inflammatory protein 3 (MIP-3) and Myeloid progenitor inhibitory factor 1 (MPIF-1). CCL23 is predominantly expressed in lung and liver tissue, but is also found in bone marrow and placenta. It is also expressed in some cell lines of myeloid origin. CCL23 is highly chemotactic for resting T cells and monocytes and slightly chemotactic for neutrophils. An inhibitory activity on hematopoietic progenitor cells has also been attributed to it. The gene for CCL23 is located on human chromosome 17 in a locus containing several other CC chemokines. CCL23 is a ligand for the chemokine receptor CCR1. References Cytokines
CCL23
[ "Chemistry" ]
199
[ "Cytokines", "Signal transduction" ]
8,696,928
https://en.wikipedia.org/wiki/Sky%20and%20Water%20I
Sky and Water I is a woodcut print by the Dutch artist M. C. Escher first printed in June 1938. The basis of this print is a regular division of the plane consisting of birds and fish. Both prints have the horizontal series of these elements—fitting into each other like the pieces of a jigsaw puzzle—in the middle, transitional portion of the prints. In this central layer the pictorial elements are equal: birds and fish are alternately foreground or background, depending on whether the eye concentrates on light or dark elements. The birds take on an increasing three-dimensionality in the upward direction, and the fish, in the downward direction. But as the fish progress upward and the birds downward they gradually lose their shapes to become a uniform background of sky and water, respectively. According to Escher: "In the horizontal center strip there are birds and fish equivalent to each other. We associate flying with sky, and so for each of the black birds the sky in which it is flying is formed by the four white fish which encircle it. Similarly swimming makes us think of water, and therefore the four black birds that surround a fish become the water in which it swims." This print has been used in physics, geology, chemistry, and in psychology for the study of visual perception. In the pictures a number of visual elements unite into a simple visual representation, but separately each forms a point of departure for the elucidation of a theory in one of these disciplines. See also Sky and Water II Tessellation Sources M. C. Escher—The Graphic Work; Benedikt-Taschen Publishers. M. C. Escher—29 Master Prints; Harry N. Abrams, Inc., Publishers. Locher, J. L. (2000). The Magic of M. C. Escher. Harry N. Abrams, Inc. Works by M. C. Escher 1938 works Woodcuts Fish in art Birds in art Optical illusions
Sky and Water I
[ "Physics" ]
427
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
8,697,057
https://en.wikipedia.org/wiki/CCL24
Chemokine (C-C motif) ligand 24 (CCL24), also known as myeloid progenitor inhibitory factor 2 (MPIF-2) or eosinophil chemotactic protein 2 (eotaxin-2), is a protein that in humans is encoded by the CCL24 gene. This gene is located on human chromosome 7. Function CCL24 is a small cytokine belonging to the CC chemokine family. CCL24 interacts with chemokine receptor CCR3 to induce chemotaxis in eosinophils. This chemokine is also strongly chemotactic for resting T lymphocytes and slightly chemotactic for neutrophils. Clinical significance Elevated levels of eotaxin-2 have been seen in patients with aspirin-exacerbated respiratory disease (AERD), such as asthma. People with lower plasma levels of eotaxin-2 have not shown a tendency to develop aspirin-induced asthma. References Cytokines
CCL24
[ "Chemistry" ]
221
[ "Cytokines", "Signal transduction" ]
8,697,690
https://en.wikipedia.org/wiki/RFIQin
RFIQin, also referred to as RFIQ, is a patented automatic cooking device that consists of three different-sized pans, a portable induction heater, and recipe cards. It was designed by Vita Craft Corporation but is currently sold only in Japan through Vita Craft Japan. Electronics are embedded in the cookware, which monitor the food and send wireless signals to adjust the temperature of the induction heater accordingly; this prevents the loss of nutrients and saves thermal energy because the food is not overheated. Specialized recipe cards transmit a wireless signal to the RFIQin system when a recipe card is waved under the handle of the pan, initiating the cooking process. Each recipe card can include 23 distinct recipe steps. The recipe cards have cooking steps to follow that are indicated by a beeping sound from the induction heater. The system can cook almost all types of food, including cakes and fried foods. Each pan is embedded with an RFID tag in the handle of the cookware, which is covered by a special pan tag that protects the RFID tag from heat and moisture. A temperature sensor connected to the RFID tag is embedded within a tunnel in the bottom center of the pan. The RFID tag monitors the food 16 times per second and transmits a proprietary signal to the induction heater regarding the heating characteristics of the contents as well as the temperature of the contents to adjust the heat accordingly; the special pan tag is not battery powered and does not need to be recharged. During the cooking process the food does not need to be monitored or stirred because the pans use waterless cooking methods and the induction heater uses alternating pulses to control the heat, so the liquid in the pans continually revolves in a circular motion. The RFIQin pans are built with a vapor seal that enables the pans to use techniques of pressure cooking. The portable induction heater and cookware can be used in a manual mode as a regular induction heater. Popular Science reported that RFIQin would cost $600; however, in May 2006, The Sacramento Bee reported that RFIQin costs $2,100 (241,500 yen) in Japan. See also Induction cooking Pressure cooking References External links Vita Craft Corporation: RFIQ, description of RFIQ Vita Craft Japan: RFIQ TV-Tokyo: Video of RFIQin (RealPlayer required) Automatic identification and data capture Cooking appliance brands Radio-frequency identification
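The feedback behaviour described above (the in-handle tag reports the pan temperature roughly 16 times per second and the induction heater adjusts its output in response) can be pictured as a simple closed control loop. The following Python sketch is purely illustrative: it is not Vita Craft's proprietary protocol, and every name in it (read_pan_temperature, set_heater_power, the ±2 °C band) is a hypothetical stand-in.

import time

def run_recipe_step(target_temp_c, duration_s, read_pan_temperature, set_heater_power):
    """Hold the pan near target_temp_c for duration_s seconds.

    Hypothetical illustration of the kind of temperature feedback loop the
    article describes; the callbacks stand in for the RFID tag and heater.
    """
    poll_interval = 1.0 / 16               # article: the tag reports 16 times per second
    end_time = time.time() + duration_s
    while time.time() < end_time:
        temp = read_pan_temperature()       # temperature reported by the in-handle tag
        if temp < target_temp_c - 2:
            set_heater_power("increase")    # pan too cool: raise induction power
        elif temp > target_temp_c + 2:
            set_heater_power("decrease")    # pan too hot: lower induction power
        else:
            set_heater_power("hold")        # within the band: keep current power
        time.sleep(poll_interval)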
RFIQin
[ "Technology", "Engineering" ]
494
[ "Radio-frequency identification", "Radio electronics", "Data", "Automatic identification and data capture" ]
8,699,703
https://en.wikipedia.org/wiki/CCL26
Chemokine (C-C motif) ligand 26 (CCL26) is a small cytokine belonging to the CC chemokine family that is also called Eotaxin-3, Macrophage inflammatory protein 4-alpha (MIP-4-alpha), Thymic stroma chemokine-1 (TSC-1), and IMAC. It is expressed by several tissues including heart, lung and ovary, and in endothelial cells that have been stimulated with the cytokine interleukin 4. CCL26 is chemotactic for eosinophils and basophils and elicits its effects by binding to the cell surface chemokine receptor CCR3. The gene for this chemokine is located on human chromosome 7. References External links Cytokines
CCL26
[ "Chemistry" ]
177
[ "Cytokines", "Signal transduction" ]
8,699,846
https://en.wikipedia.org/wiki/Gold%E2%80%93aluminium%20intermetallic
Gold–aluminium intermetallic is a type of intermetallic compound of gold and aluminium that usually forms at contacts between the two metals. These intermetallics have different properties from the individual metals, such as low conductivity and a high melting point, depending on their composition. Due to the difference in density between the metals and the intermetallics, the growth of the intermetallic layers causes a reduction in volume and therefore creates gaps in the metal near the interface between gold and aluminium. The production of gaps lowers the strength of the joint, which can cause mechanical failure there and is the main problem that the intermetallics cause in such bonds. In microelectronics, these properties can cause problems in wire bonding. The main compounds formed are usually Au5Al2 (white plague) and AuAl2 (purple plague), both of which form at high temperatures; Au5Al2 and AuAl2 can then react further with Au to form the more stable compound Au2Al. Properties Au5Al2 has low electrical conductivity and a relatively low melting point. Its formation at the joint causes an increase in electrical resistance, which can lead to electrical failure. Au5Al2 typically forms at 95% Au and 5% Al by mass; its melting point is about 575 °C, which is the lowest among the major gold-aluminum intermetallic compounds. AuAl2 is a brittle bright-purple compound, with a composition of about 78.5% Au and 21.5% Al by mass. AuAl2 is the most thermally stable species of the Au–Al intermetallic compounds, with a melting point of 1060 °C (see phase diagram), which is similar to the melting point of pure gold. AuAl2 can react with Au and is therefore often replaced by Au2Al, a tan-colored substance, which forms at a composition of 93% Au and 7% Al by mass. It is also a poor conductor and can cause electrical failure of the joint, which can further lead to mechanical failure. Voiding At lower temperatures, about 400–450 °C, an interdiffusion process takes place at the junction, leading to the formation of layers of different gold-aluminum intermetallic compounds with different growth rates. Gaps are formed as the denser and faster-growing layers consume the slower-growing layers. This process is known as Kirkendall voiding, and it leads to both increased electrical resistance and mechanical weakening of the wire bond. When the voids form along the diffusion front, the process is aided by contaminants present in the lattice and is known as Horsting voiding, which is similar to Kirkendall voiding. See also Colored gold Tin whiskers References External links Harvard: Gold Aluminium Intermetallics Aluminium aurate – purple gold Corrosion Gold Aluminides Integrated circuits Intermetallics
Gold–aluminium intermetallic
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
608
[ "Inorganic compounds", "Computer engineering", "Metallurgy", "Corrosion", "Electrochemistry", "Intermetallics", "Condensed matter physics", "Alloys", "Materials degradation", "Aluminides", "Integrated circuits" ]
8,700,260
https://en.wikipedia.org/wiki/Antiknock%20agent
An antiknock agent is a gasoline additive used to reduce engine knocking and increase the fuel's octane rating by raising the temperature and pressure at which auto-ignition occurs. The mixture known as gasoline or petrol, when used in high compression internal combustion engines, has a tendency to knock (also called "pinging" or "pinking") and/or to ignite early before the correctly timed spark occurs (pre-ignition, refer to engine knocking). Notable early antiknock agents, especially tetraethyllead, added to gasoline included large amounts of toxic lead. The chemical was responsible for global negative impacts on health, and the phase out of leaded gasoline from the 1970s onward was reported by the United Nations Environmental Programme to be responsible for "$2.4 trillion in annual benefits, 1.2 million fewer premature deaths, higher overall intelligence and 58 million fewer crimes." Some other chemicals used as gasoline additives are thought to be less toxic. Research Early research was led by A. H. Gibson and Harry Ricardo in England and Thomas Midgley Jr. and Thomas Boyd in the United States. The discovery that lead additives modified this behavior led to the widespread adoption of the practice in the 1920s and therefore more powerful higher compression engines. The most popular additive was tetraethyllead. However, with the discovery of the environmental and health damage caused by the lead, attributed to Derek Bryce-Smith and Clair Cameron Patterson, and the incompatibility of lead with catalytic converters found on virtually all US automobiles since 1975, this practice began to wane in the 1980s. Most countries are phasing out leaded fuel although different additives still contain lead compounds. Other additives include aromatic hydrocarbons, ethers and alcohol (usually ethanol or methanol). Typical agents Typical agents that have been used for their antiknock properties are: Tetraethyllead (still in use as a high octane additive) MTBE Ethanol Methylcyclopentadienyl manganese tricarbonyl (MMT) Ferrocene Iron pentacarbonyl Toluene Isooctane BTEX - a hydrocarbon mixture of benzene, toluene, xylene and ethyl-benzene, also called gasoline aromatics Xylidine - any of a number of isomeric amines of xylene. Tetraethyllead In the U.S., where tetraethyllead had been blended with gasoline (primarily to boost octane levels) since the early 1920s, standards to phase out leaded gasoline were first implemented in 1973. In 1995, leaded fuel accounted for only 0.6% of total gasoline sales and less than 2,000 tons of lead per year. From January 1, 1996, the Clean Air Act banned the sale of leaded fuel for use in on-road vehicles in the United States. Possession and use of leaded gasoline in a regular on-road vehicle now carries a maximum US$10,000 fine in the United States. However, fuel containing lead may continue to be sold for off-road uses, including aircraft, racing cars, farm equipment, and marine engines. The ban on leaded gasoline led to thousands fewer tons of lead being released into the air by automobiles. Similar bans in other countries have resulted in sharply decreasing levels of lead in people's bloodstreams. A side effect of the lead additives was protection of the valve seats from erosion. Many classic cars' engines have needed modification to use lead-free fuels since leaded fuels became unavailable. However, "lead substitute" products are also produced and can sometimes be found at auto parts stores. 
Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildups, improve combustion, and to allow easier starting in cold climates. In some parts of South America, Asia, and the Middle East, leaded gasoline is still in use. Leaded gasoline was phased out in sub-Saharan Africa, starting 1 January 2006. A growing number of countries have drawn up plans to ban leaded gasoline in the near future. Some experts speculate that leaded petrol was behind a global crime wave in the late 1980s and early 1990s. To avoid deposits of lead inside the engine, lead scavengers are added to the gasoline together with tetraethyllead. The most common ones are: Tricresyl phosphate 1,2-Dibromoethane 1,2-Dichloroethane MTBE As tetraethyllead use declined, industry had to decide how to make up the octane deficit between the principal marketable light fuels produced by their refineries, and the higher octane fuels needed for high-compression gasoline engines in the automobile fleet. Around 70% of the difference was accommodated by more advanced processes at the refinery stage, cracking other hydrocarbon products from the distillation stack to modify them into fuels that would blend gasoline closer the appropriate octane. Most of the rest of the octane deficit required chemical additives not derived from the refinery process. Tetraethyl lead was largely replaced in the US with methyl tert-butyl ether starting in 1979. MTBE is a toxic water pollutant, and a series of groundwater contamination scandals starting in the 90's prompted the EPA to begin phasing MTBE out in 2000. Ethanol MTBE's water pollution issues prompted plans for a phaseout, starting in 2000 with an EPA draft proposal, which was addressed several times at the state level in the years to follow, and eventually cemented in place federally with a 9-year phaseout in 2005's Energy Policy Act, with significant proportions of fuel ethanol designated as the replacement antiknock agent for the US automotive fuel system. Congress' attempts to promote ethanol for its geopolitical use as a backstop on any attempts to limit the US' gasoline supply, and also its incentives to reward Iowan corn farmers, whose state political primaries hold a special place in the electoral system, escalated ethanol from an additive to be used as needed, then to a fixed blending proportion of 5%, and then 10%, which is today the most common US fuel blend. Ethanol has several issues as an antiknock additive. It is hydrophilic, pulling water vapor out of moist air, and it also increases the level of free oxygen in the fuel significantly. Both of these cause significant degradation to traditionally constructed engines, posing both residue and corrosion issues in increasing proportion with increasing fractions of ethanol. Whereas age-degraded gasoline may simply polymerize, evaporate, and thus lose its flammability, age-degraded gasoline-ethanol blends can cause severe damage if allowed to sit in an engine. Automotive engines addressed this with the mandated shift over to ethanol-tolerant metals and seals, and with the use of smart electronic fuel injection, which has some flexibility to adjust combustion properties and timing. Automotive engines did not see major issues because of these factors, and because automobiles in active use typically cycle through their gas tank in a matter of weeks. In small carburetor engines, like generators and lawnmowers, ethanol damage became the dominant mode of failure. 
MMT Methylcyclopentadienyl manganese tricarbonyl (MMT) has been used for many years in Canada and recently in Australia to boost octane ratings. It also allows old cars, designed to use leaded fuel, to run on unleaded fuel without the need for additives to prevent valve stem erosion. A large Canadian study from 2002 (funded by automakers, who are against its use) concluded that MMT impairs the effectiveness of automobile emission controls and increases pollution from motor vehicles. However, a later study by the Canadian government found that "no Notice of Defect was found to be potentially caused by MMT." Many studies have been undertaken over time that confirmed the use of MMT is compatible with vehicles and safe for human health and the environment. In particular, a 2013 risk assessment on MMT was undertaken by ARCADIS Consulting, following a methodology developed by the European Commission. This risk assessment was verified by an independent panel and found by the EU Commission to be compliant with their methodology. It concluded that "when MMT is used as a fuel additive in petrol, no significant human health or environmental concerns related to exposure to either MMT or its transformation [combustion] products (manganese phosphate, manganese sulphate and manganese tetroxide) were identified even in locations where MMT is approved for use at levels up to 18 mg Mn/L." As stated by Health Canada in their risk assessment on the widespread use of MMT in Canadian gasoline, "all analyses indicate that the combustion products of MMT in gasoline do not represent an added health risk to the Canadian population" MMT is manufactured by reduction of bis(methylcyclopentadienyl) manganese using triethylaluminium. The reduction is conducted under an atmosphere of carbon monoxide. MMT is a so-called half-sandwich compound, or more specifically a piano-stool complex (since the three CO ligands are like the legs of a piano stool). The manganese atom in MMT is coordinated with three carbonyl groups as well as to the methylcyclopentadienyl ring. These hydrophobic organic ligands make MMT highly lipophilic, which may increase bioaccumulation. While the structure of MMT suggests lipophilicity and potential to bioaccumulate, comparison of bioconcentration factors (BCF) reported for plant and animal species in comparison to regulatory-based cutoffs (i.e., US EPA and EU REACH) indicates a low bioaccumulative potential of MMT. Figures 2 and 3 of the study (pages 182 & 184) shows the BCF plotted against time and illustrates the potential BCF of MMT. From these figures, the upper curve (A) demonstrates the 9-day MMT BCF plateauing at approximately 400 in plants and 200 in fish, with both values well below the Bioaccumulative / Very Bioaccumulative (B/vB) thresholds of US EPA, EU REACH and Environment & Climate Change Canada. A variety of related complexes are known, including ferrocene, which is also under consideration as an additive to gasoline. Ferrocene Ferrocene is the organometallic compound with the formula Fe(C5H5)2. It is the prototypical metallocene, a type of organometallic chemical compound consisting of two cyclopentadienyl rings bound on opposite sides of a central metal atom. Such organometallic compounds are also known as sandwich compounds. The rapid growth of organometallic chemistry is often attributed to the excitement arising from the discovery of ferrocene and its many analogues. 
Ferrocene and its numerous derivatives have no large-scale applications, but have many niche uses that exploit their unusual structure (ligand scaffolds, pharmaceutical candidates), robustness (anti-knock formulations, precursors to materials), and redox reactions (reagents and redox standards). Use for global cooling has been proposed. Ferrocene and its derivatives are antiknock agents added to the petrol used in motor vehicles, and are safer than the now-banned tetraethyllead. Petrol additive solutions containing ferrocene can be added to unleaded petrol to enable its use in vintage cars designed to run on leaded petrol. The iron-containing deposits formed from ferrocene can form a conductive coating on the spark plug surfaces. Iron pentacarbonyl Iron pentacarbonyl, also known as iron carbonyl, is the compound with the formula Fe(CO)5. Under standard conditions Fe(CO)5 is a free-flowing, straw-colored liquid with a pungent odour. This compound is a common precursor to diverse iron compounds, including many that are useful in organic synthesis. Fe(CO)5 is prepared by the reaction of fine iron particles with carbon monoxide, and is inexpensive to purchase. Iron pentacarbonyl is one of the homoleptic metal carbonyls, i.e. metal complexes bonded only to CO ligands. Other examples include octahedral Cr(CO)6 and tetrahedral Ni(CO)4. Most metal carbonyls have 18 valence electrons, and Fe(CO)5 fits this pattern with 8 valence electrons on Fe and five pairs of electrons provided by the CO ligands. Reflecting its symmetrical structure and charge neutrality, Fe(CO)5 is volatile; it is one of the most frequently encountered liquid metal complexes. Fe(CO)5 adopts a trigonal bipyramidal structure with the Fe atom surrounded by five CO ligands: three in equatorial positions and two axially bound. The Fe-C-O linkages are each linear. Fe(CO)5 is the archetypal fluxional molecule due to the rapid interchange of the axial and equatorial CO groups via the Berry mechanism on the NMR timescale. Consequently, the 13C NMR spectrum exhibits only one signal due to the rapid interchange between nonequivalent CO sites. In Europe, iron pentacarbonyl was once used as an anti-knock agent in petrol in place of tetraethyllead. Two more modern alternative fuel additives are ferrocene and methylcyclopentadienyl manganese tricarbonyl. Fe(CO)5 is used in the production of "carbonyl iron", a finely divided form of iron used in magnetic cores of high-frequency coils for electronics, and for manufacture of the active ingredients of some radar absorbent materials (e.g. iron ball paint). It is famous as a chemical precursor for the synthesis of various iron-based nanoparticles. Iron pentacarbonyl has been found to be a strong flame speed inhibitor in oxygen-based flames. Toluene Toluene is a clear, water-insoluble liquid with the typical smell of paint thinners, redolent of the sweet smell of the related compound benzene. It is an aromatic hydrocarbon that is widely used as an industrial feedstock and as a solvent. Like other solvents, toluene is also used as an inhalant drug for its intoxicating properties. Toluene and benzene were used as octane rating boosters for aviation fuel by the Royal Air Force in World War Two. Tetraethyl lead was manufactured in the USA and was in short supply, so Rolls-Royce engineers built the Rolls-Royce Merlin to work with fuel blended with benzene and toluene. This was called "aromatic fuel". 
The Allison V-1710 engine would not run with the RAF fuels as it required tetraethyl lead for lubrication of its valvetrain, but the Packard-built Merlins would. This is why the Merlin-engine P-51 Mustangs had the text "Suitable for Aromatics" in their USAAF type description. Toluene can be used as an octane booster in gasoline fuels used in internal combustion engines. Toluene at 86% by volume fueled all the turbo Formula 1 teams in the 1980s, first pioneered by the Honda team. The remaining 14% was a "filler" of n-heptane, to reduce the octane to meet Formula 1 fuel restrictions. Toluene at 100% can be used as a fuel for both two-stroke and four-stroke engines; however, due to the density of the fuel and other factors, the fuel does not vaporize easily unless preheated to 70 degrees Celsius (Honda accomplished this in their Formula 1 cars by routing the fuel lines through the exhaust system to heat the fuel). Toluene also poses problems similar to those of alcohol fuels: it eats through standard rubber fuel lines and lacks the lubricating properties of standard gasoline, which can break down fuel pumps and cause upper cylinder bore wear. Toluene has also been used as a coolant for its good heat transfer capabilities in sodium cold traps used in nuclear reactor system loops. Properties of xylenes and ethylbenzene are nearly identical to those of toluene, with the latter advertised by a refinery as a "component of high performance fuels". 2,2,4-Trimethylpentane (isooctane) 2,2,4-Trimethylpentane, also known as isooctane, is an octane isomer which defines the 100 point on the octane rating scale (the zero point is n-heptane). It is an important component of gasoline. Isooctane is produced on a massive scale in the petroleum industry, usually as a mixture with related hydrocarbons. The alkylation process alkylates isobutane with isobutylene using a strong acid catalyst. In the NExOCTANE process, isobutylene is dimerized into isooctene and then hydrogenated to isooctane. Xylidine In World War II, xylidine was an important antiknock agent in very high performance aviation gasolines. Its purpose was to permit high levels of boost pressure in multiple-stage turbochargers, and thus high power at high altitudes, without causing detonation that would destroy the engine. The high pressures brought high temperatures of inlet air, making engines prone to knock. This use, and the storage stabilization methods, were important military secrets. See also MTBE controversy References External links Engine technology
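The reference-scale definition mentioned above can be written as a simple relation; this is a minimal restatement of that definition rather than a formula quoted from any source cited here:

\mathrm{ON}(\text{fuel}) = x \quad \text{when the fuel knocks like a blend of } x\%\ \text{isooctane and } (100 - x)\%\ n\text{-heptane (by volume)}

For example, a fuel rated at 95 octane resists knock as well as a 95:5 isooctane/n-heptane reference blend, while ratings above 100 are obtained by extrapolation beyond pure isooctane.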
Antiknock agent
[ "Technology" ]
3,613
[ "Engine technology", "Engines" ]
8,700,402
https://en.wikipedia.org/wiki/NATO%20CRONOS
Crisis Response Operations in NATO Operating Systems (CRONOS) is a system of interconnected computer networks used by NATO to transmit classified information. It supports operations at the NATO Secret level, with access to NATO intelligence applications and databases. As of 1999, it was a wide area network of NT computers used by NATO in Europe. CRONOS provides e-mail, the Microsoft Office Suite, and similar services; it supports informal messaging (e-mail) and information sharing within the NATO community. There is no connectivity between CRONOS and any US network or with the coalition wide area network. See also SIPRNet – U.S. Secret Internet Protocol Router Network RIPR – U.S. / Korea Coalition Network UK Networks Joint Operational Command System (JOCS) Defence Information Infrastructure Foreign and Commonwealth Office's (FCO) FIRECREST References External links Newsletter for Information Assurance Technology Professionals, Spring 1999 NATO Wide area networks Military communications
NATO CRONOS
[ "Technology", "Engineering" ]
184
[ "Computing stubs", "Military communications", "Telecommunications engineering", "Computer network stubs" ]
8,700,475
https://en.wikipedia.org/wiki/Tricresyl%20phosphate
Tricresyl phosphate (TCP), is a mixture of three isomeric organophosphate compounds most notably used as a flame retardant. Other uses include as a plasticizer in manufacturing for lacquers and varnishes and vinyl plastics and as an antiwear additive in lubricants. Pure tricresyl phosphate is a colourless, viscous liquid, although commercial samples are typically yellow. It is virtually insoluble in water, but easily soluble in organic solvents like toluene, hexane, and diethyl ether among others. It was synthesized by Alexander Williamson in 1854 upon reacting phosphorus pentachloride with cresol (a mixture of para-, ortho-, and meta- isomers of methylphenol), though today's manufacturers can prepare TCP by mixing cresol with phosphorus oxychloride or phosphoric acid as well. TCP, especially the all-ortho isomer, is the causative agent in a number of acute poisonings. Its chronic toxicity is also of concern. The ortho-isomer is rarely used on its own outside of laboratory studies that require isomeric purity, due to its extremely toxic nature, and is generally excluded from commercial products where TCP is involved. Isomers The most dangerous isomers are considered to be those containing ortho isomers, such as tri-ortho-cresyl phosphate, TOCP. The World Health Organization stated in 1990 that "Because of considerable variation among individuals in sensitivity to TOCP, it is not possible to establish a safe level of exposure" and "TOCP are therefore considered major hazards to human health." Therefore, strenuous efforts have been made to reduce the content of the ortho isomers in commercial TCP if there is a risk of human exposure. However, researchers at the University of Washington found that non-ortho TCP isomers present in synthetic jet engine oils do inhibit certain enzymes. Health calamities from TCP TCP was the source of a 1977 epidemic of acute polyneuropathy in Sri Lanka where 20 Tamil girls were poisoned by TCP-contaminated gingili oil. It is a toxic substance that causes neuropathy, paralysis in the hands and feet, and/or death for humans and animals alike. It can be ingested, inhaled, or even absorbed through the skin. Its ortho-isomer is notoriously known as a source of several delayed neurotoxic outbreaks across recent history. Contemporary commercial products typically contain only the para- and meta- isomers of TCP due to the lack of neurotoxic potential within these isomers. The earliest known mass poisoning event by TOCP occurred in 1899 when six French hospital patients were given a phosphocresote cough mixture containing the organophosphate compound. Pharmacist Jules Brissonet had synthesized this compound in the hopes of treating tuberculosis, but soon after administration all six patients developed polyneuropathy. The original paper described this phosphocresote to be:A bland, limpid liquid, nearly tasteless and odourless, which is not irritating to the gastric mucous membranes. When creosote is combined with phosphoric acid the metabolic action produced is much more marked, and Phosote can be tolerated in larger doses and for a longer continuance than Creosote or Guaiacol. Dose of the preparation, one to two grammes three times a day. The greatest mass poisoning by TOCP occurred in 1930 when it was added as an adulterant to the popular drink Jamaica ginger, also known as Ginger Jake, during the United States Prohibition era, when all alcoholic drinks had been outlawed by the Eighteenth Amendment to the United States Constitution. 
Jake was listed as a cure for "assorted ailments" in the U.S. Pharmacopoeia and was thus easy to obtain; as it had a high alcohol content it was used as a way to obtain alcohol legally. Up to 100,000 people were poisoned and 5,000 paralyzed when a manufacturer of Ginger Jake added Lindol—a compound that consisted mainly of TOCP—to their product. The exact reason why TOCP was found in Ginger Jake is disputed; one source claims it was to further extract the Jamaica root, another source claims it was to water the drink down, and yet another source claims it was a result of contamination from lubricating oils. Binges of Ginger Jake resulted in what became known as a "Jake walk", in which patients experienced a highly irregular gait caused by numbness in the legs, followed by eventual paralysis of the wrists and feet. In medical journals it was described as producing an organophosphate-induced delayed neuropathy (OPIDN) neurodegenerative syndrome, "characterized by distal axonal lesions, ataxia, and neuronal degeneration in the spinal cord and peripheral nervous systems." In 1932, 60 European women experienced TOCP poisoning due to the abortion-inducing (abortifacient) drug apiol. This drug, formed by the phenylpropanoid compound extracted from parsley leaves, was exploited throughout history—and even known to Hippocrates—as an agent to terminate pregnancies. The contamination of the modern drug in 1932 was not accidental, but rather included as an "additional stimulus." Those who took the pill experienced comas, convulsions, paralysis of the lower body (paraplegia) and often death. Apiol was subsequently criticized by doctors, journalists, and activists until it was discontinued, on the grounds that the dangers were too great and the number of poisonings was likely higher than accounted for. Other mass poisonings include: In 1937, 60 South Africans were poisoned after using contaminated cooking oil that had been stored in drums that previously stored lubricating oil. In two separate incidents in 1940, 74 and at least 17 men in units of the Swiss army experienced a similar outbreak when their cooking oil was contaminated with machine gun oil. They became known as the "Oil soldiers". In the 1950s, 11 South Africans used water from drums from a paint factory that previously stored TOCP. 10,000 people in Morocco were poisoned in 1959, when they consumed cooking oil contaminated with jet-plane lubricating oil. Hundreds of Germans were poisoned in the cities of Eckernförde (1941) and Kiel (1945) when torpedo lubricants, "organised" from the black market, were used as a cooking oil replacement. That was common, for example after World War I, because cooking oil was of natural origin, but in World War II TOCP had been added to the lubricants for thermal purposes. Severe physical and neurological damage was experienced, and the illness was called the . Aerotoxic syndrome TCP is used as an additive in turbine engine oil and can potentially contaminate an airliner cabin via a bleed air "fume event". Aerotoxic syndrome is the name given to the alleged ill effects (with symptoms including memory loss, depression and schizophrenia) caused by exposure to engine chemicals. However, industry-funded studies in the UK discussed in 1999–2000 did not find a link between TCP and long-term health problems. Safety Animals In studies on slow lorises (Nycticebus coucang coucang), numerous chronic effects were observed from topical applications. Mammalian placental development was also negatively affected. 
Metabolism Although TOCP is mainly excreted through urine and feces, it is partially metabolized by the hepatic cytochrome P450 system. Pathways include hydroxylation at one or more methyl groups, dearylation (removal of a o-cresyl group) and conversion of the hydroxymethyl groups to an aldehyde or a carboxylic acid. The first step results in a saligenin cyclic o-tolyl phosphate (SCOTP) intermediate, a neurotoxin. To the right, the first step of TOCP metabolism is depicted by means of chemical structures. This intermediate is able to inhibit neuropathy target esterase (NTE) and results in the classic organophosphate-induced delayed neuropathy (OPIDN). In tandem, TOCP exerts physical damage by causing axonal destruction and myelin disintegration within specialized cells that transmit nerve impulses (neurons). In addition to the formation of SCOTP, the interactions between TOCP and two different human cytochrome P450 complexes (1A2 and 3A4) can further produce 2-(ortho-cresyl)-4H-1,2,3-benzodioxaphosphoran-2-one (CBDP). This metabolite can bind to butyrylcholinesterase (BuChE) and/or acetylcholinesterase (AChE). Binding to BuChE results in no adverse effects, for its typical role is to covalently bind to organophosphate poisons and detoxify them by inactivation. The dangers in metabolizing TOCP to CBDP occur when its potential to bind to AChE become imminent, for inactivation of the enzyme in nerve synapses can be lethal. The enzyme plays a tantamount role in terminating nerve impulse transmission "by hydrolyzing the neurotransmitter acetylcholine." Upon inactivation, acetylcholine can no longer be broken down in the body and results in uncontrollable muscle spasms, paralyzed breathing (bradycardia), convulsions, and/or death. Luckily, TOCP is considered a weak AChE inhibitor. Onset and treatment In humans, the first symptoms are weakness/paralysis of the hands and feet on both sides of the body due to damage to the peripheral nervous system (polyneuropathy) and a sensation of pins-and-needles (paresthesia). Onset typically occurs between 3–28 days from initial exposure. If ingested, this can be preceded by gastrointestinal symptoms that include nausea, vomiting, and diarrhea. Rates of metabolism vary by species and by individual; some people developed severe polyneuropathy after ingesting 0.15g of TOCP, whereas others have been reported asymptomatic after 1-2g. Though death is uncommon in acute exposure cases, the result of paralysis can last for months or years due to differences in gender, age, and route of exposure. The cardinal treatment is physical therapy to restore the use of the hands and feet, though it can take up to 4 years to only regain a fraction of motor control. Exposure to TOCP has been characterized by a list of observations: Cholinesterase levels will remain unchanged or show no significant changes. Electromyography will show partial or complete reactions of degeneration. An increase of protein in cerebrospinal fluid. Swelling of the parotid glands (non-tender). References Acetylcholinesterase inhibitors Flame retardants Fuel additives Neurotoxins Organophosphates Phosphate esters Plasticizers Prohibition in the United States Solvents
Tricresyl phosphate
[ "Chemistry" ]
2,328
[ "Neurochemistry", "Neurotoxins" ]
8,700,786
https://en.wikipedia.org/wiki/Organophosphate-induced%20delayed%20neuropathy
Organophosphate-induced delayed neuropathy (OPIDN), also called organophosphate-induced delayed polyneuropathy (OPIDP), is a neuropathy caused by the death of neurons in the central nervous system, especially in the spinal cord, as a result of acute or chronic organophosphate poisoning. A striking example of OPIDN occurred during the 1930s Prohibition Era, when thousands of men in the American South and Midwest developed arm and leg weakness and pain after drinking a "medicinal" alcohol substitute. The drink, called "Ginger Jake," contained a Jamaican ginger extract adulterated with tri-ortho-cresyl phosphate (TOCP), which resulted in partially reversible neurologic damage. The damage produced a characteristic limp known as "jake paralysis", also called "jake leg" or "jake walk", terms frequently used in the blues music of the period. Europe and Morocco both experienced outbreaks of TOCP poisoning, from contaminated abortifacients and cooking oil, respectively. The disorder may contribute to the chronic multisymptom illnesses of Gulf War veterans as well as to aerotoxic syndrome (especially tricresyl phosphate poisoning). The exact cause of the syndrome is unknown, although it has been associated with inhibition of patatin-like phospholipase domain-containing protein 6 (PNPLA6, also known as neuropathy target esterase). There is no specific treatment, and recovery is usually incomplete, with only the sensory nervous system recovering while motor neuropathy persists. See also Aerotoxic syndrome Gulf War syndrome References External links Organophosphates Neurological disorders Toxic effects of substances chiefly nonmedicinal as to source
Organophosphate-induced delayed neuropathy
[ "Environmental_science" ]
356
[ "Toxic effects of substances chiefly nonmedicinal as to source", "Toxicology" ]
8,700,900
https://en.wikipedia.org/wiki/List%20of%20pop-up%20blocking%20software
This is a list of software that can block pop-up ads. Blocking is usually a user-enabled option, and can in many cases allow specified exceptions. Browsers that can block pop-up ads Trident shells AOL Explorer GreenBrowser Internet Explorer Lunascape Maxthon MSN Explorer NeoPlanet Netcaptor Netscape 8 Sleipnir Gecko-based browsers Camino Epiphany Flock Galeon K-Meleon Lunascape Mozilla Application Suite Mozilla Firefox Netscape 7 Netscape 8 SeaMonkey KHTML/WebKit-based browsers Brave Google Chrome iCab Konqueror Lunascape OmniWeb Safari Shiira Presto-based browsers Opera Others Links NetSurf w3m Add-on programs that block pop-up ads References Web browsers Software add-ons Software by type Online advertising Lists of software Internet privacy software Ad blocking software
List of pop-up blocking software
[ "Technology" ]
195
[ "Computing-related lists", "Lists of software", "Software by type" ]
8,701,085
https://en.wikipedia.org/wiki/List%20of%20books%20in%20computational%20geometry
This is a list of books in computational geometry. There are two major, largely nonoverlapping categories: Combinatorial computational geometry, which deals with collections of discrete objects or objects defined in discrete terms (points, lines, polygons, polytopes, etc.) and for which algorithms of a discrete/combinatorial character are used, and Numerical computational geometry, also known as geometric modeling and computer-aided geometric design (CAGD), which deals with modelling the shapes of real-life objects in terms of curves and surfaces with algebraic representations. Combinatorial computational geometry General-purpose textbooks The book is the first comprehensive monograph at the level of a graduate textbook to systematically cover the fundamental aspects of the emerging discipline of computational geometry. It is written by founders of the field, and the first edition covered all major developments in the preceding 10 years. In terms of comprehensiveness it was preceded only by the 1984 survey paper, Lee, D. T., Preparata, F. P.: "Computational geometry - a survey". IEEE Trans. on Computers. Vol. 33, No. 12, pp. 1072–1101 (1984). It is focused on two-dimensional problems, but also has digressions into higher dimensions. The initial core of the book was M. I. Shamos's doctoral dissertation, which yet another pioneer in the field, Ronald Graham, suggested turning into a book. The introduction covers the history of the field, basic data structures, and necessary notions from the theory of computation and geometry. The subsequent sections cover geometric searching (point location, range searching), convex hull computation, proximity-related problems (closest points, computation and applications of the Voronoi diagram, the Euclidean minimum spanning tree, triangulations, etc.), geometric intersection problems, and algorithms for sets of isothetic rectangles. The monograph is a rather advanced exposition of problems and approaches in computational geometry focused on the role of hyperplane arrangements, which are shown to constitute a basic underlying combinatorial-geometric structure in certain areas of the field. The primary target audience is active theoretical researchers in the field rather than application developers. Unlike most books in computational geometry, which focus on 2- and 3-dimensional problems (where most applications of computational geometry lie), this book aims to treat its subject in the general multi-dimensional setting. The textbook provides an introduction to computational geometry from the point of view of practical applications. Starting with an introduction chapter, each of the 15 remaining chapters formulates a real application problem, states the underlying geometrical problem, and discusses techniques of computational geometry useful for its solution, with algorithms provided in pseudocode. The book treats mostly 2- and 3-dimensional geometry. The goal of the book is to provide a comprehensive introduction to methods and approaches, rather than the cutting edge of research in the field: the presented algorithms provide transparent and reasonably efficient solutions based on fundamental "building blocks" of computational geometry.
The book consists of the following chapters (each of which presents both solutions for the topic in its title and applications of those solutions): "Computational Geometry (Introduction)", "Line Segment Intersection", "Polygon Triangulation", "Linear Programming", "Orthogonal Range Searching", "Point Location", "Voronoi Diagrams", "Arrangements and Duality", "Delaunay Triangulations", "More Geometric Data Structures", "Convex Hulls", "Binary Space Partitions", "Robot Motion Planning", "Quadtrees", "Visibility Graphs", "Simplex Range Searching". This book is an interactive introduction to the fundamental algorithms of computational geometry, formatted as an interactive document viewable using software based on Mathematica. Specialized textbooks and monographs References Numerical computational geometry (geometric modelling, computer-aided geometric design) Monographs Other Conferences Paper collections "Combinatorial and Computational Geometry", eds. Jacob E. Goodman, János Pach, Emo Welzl (MSRI Publications – Volume 52), 2005. 32 papers, including surveys and research articles on geometric arrangements, polytopes, packing, covering, discrete convexity, geometric algorithms and their computational complexity, and the combinatorial complexity of geometric objects. "Surveys on Discrete and Computational Geometry: Twenty Years Later" ("Contemporary Mathematics" series), American Mathematical Society, 2008. See also List of important publications in mathematics References External links Computational Geometry Pages Computational geometry Computer science books Computational geometry
List of books in computational geometry
[ "Mathematics" ]
912
[ "Computational geometry", "Computational mathematics" ]
8,701,138
https://en.wikipedia.org/wiki/CXCL9
Chemokine (C-X-C motif) ligand 9 (CXCL9) is a small cytokine belonging to the CXC chemokine family that is also known as monokine induced by gamma interferon (MIG). CXCL9 is one of the chemokines that induce chemotaxis, promote the differentiation and proliferation of leukocytes, and cause tissue extravasation. The CXCL9/CXCR3 axis regulates immune cell migration, differentiation, and activation. Immune reactivity occurs through recruitment of immune cells, such as cytotoxic lymphocytes (CTLs), natural killer (NK) cells, NKT cells, and macrophages. Th1 polarization also activates immune cells in response to IFN-γ. Tumor-infiltrating lymphocytes are key to clinical outcomes and to predicting the response to checkpoint inhibitors. In vivo studies suggest the axis plays a tumorigenic role by increasing tumor proliferation and metastasis, while CXCL9 predominantly mediates lymphocytic infiltration to focal sites and suppresses tumor growth. It is closely related to two other CXC chemokines, CXCL10 and CXCL11, whose genes are located near the gene for CXCL9 on human chromosome 4. CXCL9, CXCL10 and CXCL11 all elicit their chemotactic functions by interacting with the chemokine receptor CXCR3. Biomarkers CXCL9, -10 and -11 have proven to be valid biomarkers for the development of heart failure and left ventricular dysfunction, suggesting an underlying pathophysiological relation between levels of these chemokines and the development of adverse cardiac remodeling. This chemokine has also been identified as a biomarker for diagnosing Q fever infections. Interactions CXCL9 has been shown to interact with CXCR3. CXCL9 in immune reactions With respect to immune cell differentiation, some reports show that CXCL9 leads to Th1 polarization through CXCR3. An in vivo model by Zohar et al. showed that CXCL9 drove increased transcription of T-bet and RORγ, leading to the polarization of Foxp3− type 1 regulatory (Tr1) cells or T helper 17 (Th17) cells from naive T cells via STAT1, STAT4, and STAT5 phosphorylation. Several studies have shown that tumor-associated macrophages (TAMs) exert modulatory activities in the tumor microenvironment (TME), and the CXCL9/CXCR3 axis impacts TAM polarization. TAMs can have opposite effects: M1 macrophages exert anti-tumor activities, while M2 macrophages exert pro-tumor activities. Oghumu et al. showed that CXCR3-deficient mice displayed increased IL-4 production and M2 polarization in a murine breast cancer model, along with decreased innate and immune cell-mediated anti-tumor responses. With respect to immune cell activation, CXCL9 stimulates immune cells through Th1 polarization and activation. Th1 cells produce IFN-γ, TNF-α, and IL-2, and enhance anti-tumor immunity by stimulating CTLs, NK cells and macrophages. The IFN-γ-dependent immune activation loop also promotes CXCL9 release. Immune cells such as Th1 cells, CTLs, NK cells, and NKT cells show anti-tumor effects against cancer cells through paracrine CXCL9/CXCR3 signaling in tumor models, whereas autocrine CXCL9/CXCR3 signaling in cancer cells increases cancer cell proliferation, angiogenesis, and metastasis. CXCL9/CXCR3 and PD-L1/PD-1 The relationship between CXCL9/CXCR3 and PD-L1/PD-1 is an important area of research. Programmed cell death-1 (PD-1) shows increased expression on T cells at the tumor site compared to T cells in the peripheral blood, and anti-PD-1 therapy can inhibit "immune escape" and promote immune activation. Peng et al.
showed that anti-PD-1 treatment could not only enhance T cell-mediated tumor regression but also increase the expression of IFN-γ, though not of CXCL9, by bone marrow–derived cells. Blockade of the PD-L1/PD-1 axis in T cells may trigger a positive feedback loop at the tumor site through the CXCL9/CXCR3 axis. With anti-CTLA-4 antibody treatment, this axis was likewise significantly up-regulated in pretreatment melanoma lesions of patients who showed a good clinical response after ipilimumab administration. CXCL9 and melanoma CXCL9 has also been identified as a candidate biomarker for adoptive T cell transfer therapy in metastatic melanoma. Within the TME and the immune response, the CXCL9/CXCR3 axis plays a critical role in immune activation through paracrine signaling, affecting the efficacy of cancer treatments. References Further reading External links Cytokines
CXCL9
[ "Chemistry" ]
1,090
[ "Cytokines", "Signal transduction" ]
8,701,191
https://en.wikipedia.org/wiki/Tetrapod%20%28structure%29
A tetrapod is a form of wave-dissipating concrete block used to prevent erosion caused by weather and longshore drift, primarily to reinforce coastal structures such as seawalls and breakwaters. Tetrapods are made of concrete, and use a tetrahedral shape to dissipate the force of incoming waves by allowing water to flow around rather than against them, and to reduce displacement by interlocking. Invention Tetrapods were originally developed in 1950 by Pierre Danel and Paul Anglès d'Auriac of Laboratoire Dauphinois d'Hydraulique (now Artelia) in Grenoble, France, who received a patent for the design. The French invention was named tétrapode, derived from the Greek for "four" and "foot", a reference to the tetrahedral shape. Tetrapods were first used at the thermal power station in Roches Noires in Casablanca, Morocco, to protect the sea water intake. Adoption Tetrapods have become popular across the world, particularly in Japan; it is estimated that nearly 50 percent of Japan's coastline has been covered or somehow altered by tetrapods and other forms of concrete. Their proliferation on the island of Okinawa, a popular vacation destination in Japan, has made it difficult for tourists to find unaltered beaches and shoreline, especially in the southern half of the island. Similar designs See also References Further reading Coastal engineering Wave-dissipating concrete blocks Tetrahedra French inventions
Tetrapod (structure)
[ "Engineering" ]
477
[ "Coastal engineering", "Civil engineering" ]
8,701,478
https://en.wikipedia.org/wiki/Schizosaccharomycetes
Schizosaccharomycetes is a class in the kingdom of fungi. It contains the order Schizosaccharomycetales, the fission yeasts. The genus Schizosaccharomyces is a broad and ancient clade within Ascomycota that includes five known fission yeasts: Schizosaccharomyces pombe, Schizosaccharomyces japonicus, Schizosaccharomyces octosporus, Schizosaccharomyces cryophilus, and Schizosaccharomyces osmophilus. References Yeasts Monotypic fungus classes Taxa described in 1997
Schizosaccharomycetes
[ "Biology" ]
142
[ "Yeasts", "Fungi" ]
8,701,557
https://en.wikipedia.org/wiki/Schizosaccharomycetales
Schizosaccharomycetales is an order in the kingdom of fungi that contains the family Schizosaccharomycetaceae. References Yeasts Ascomycota Ascomycota orders
Schizosaccharomycetales
[ "Biology" ]
47
[ "Yeasts", "Fungi" ]
8,701,652
https://en.wikipedia.org/wiki/Schizosaccharomycetaceae
The Schizosaccharomycetaceae are a family of yeasts in the order Schizosaccharomycetales. References Yeasts Ascomycota Ascomycota families
Schizosaccharomycetaceae
[ "Biology" ]
45
[ "Yeasts", "Fungi" ]
8,701,795
https://en.wikipedia.org/wiki/Schizosaccharomyces
Schizosaccharomyces is a genus of fission yeasts. The best-studied species is S. pombe. At present, five Schizosaccharomyces species have been described (S. pombe, S. japonicus, S. octosporus, S. cryophilus and S. osmophilus). Like the distantly related Saccharomyces cerevisiae, S. pombe is a significant model organism in the study of eukaryotic cell biology. It is particularly useful in evolutionary studies because it is thought to have diverged from the Saccharomyces cerevisiae lineage between 300 million and 1 billion years ago, and thus provides an evolutionarily distant comparison. See also Yeast in winemaking References External links diArk - Schizosaccharomyces Yeasts Ascomycota Yeasts used in brewing
Schizosaccharomyces
[ "Biology" ]
192
[ "Yeasts", "Fungi" ]