Dataset fields per record: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
4,405,107
https://en.wikipedia.org/wiki/Active%20disassembly
Active Disassembly (AD) is a developing technology associated with the term active disassembly using smart materials (ADSM). Outline Smart materials such as shape memory alloys (SMA) now offer the possibility of disassembling complex items easily and in a potentially cost-effective manner. Other smart materials employed by AD include shape memory polymers (SMP), smart layers, sprays and engineering polymers. The development of this technology could make recycling of consumer products more common and thus benefit the environment. Eco-design and legislative background Companies designing and manufacturing a range of consumer goods are becoming increasingly subject to legislative and other pressures requiring them to consider the "End of Life" (EoL) implications of their products. The ELV (End of Life Vehicle) Directive in Europe, for example, states that the current reuse and recycling level of 75% (by weight) has to be raised to 85% by 2015. The WEEE (Waste Electrical and Electronic Equipment) Directive is aimed at the eradication of landfill as a means of disposing of hazardous materials such as arsenic in LEDs. Manufacturers are also required to build strategies for disassembly into the design of their products. In the past, designing products such as cars rarely involved consideration of what would happen when they were scrapped, although some companies, such as BMW, have been pro-active in this respect. Research Dr. Chiodo is the inventor of AD and ADSM technology. He focused his research on thermally triggered disassembly using shape memory materials. His work on recycling-related design solutions began in the late 1980s. In 1991, his MA thesis investigated Design for Disassembly, providing the incentive for a new automated approach to what was, at the time, a cumbersome endeavour. He conducted experiments ranging from crude force methods to highly tuned approaches such as temperature, electrical resistance, vibration, volume, explosive, chemical, induction and biologically triggered disassembly techniques. Since then, this work has expanded to a variety of dematerialization technologies, including additional triggering mechanisms, varied hierarchical control parameters and increased temperature allowances, among other considerations. Dr. Chiodo has invented hundreds of AD, ADSM and other automated technology mechanisms since his initial inventions in 1996. His recent work includes specific component isolation and clean segregation of specific elements for re-use, including LCDs. In 1996, he conducted disassembly and shape memory experiments using typical engineering polymers such as PEEK, ABS, PC and nylon, manipulating their shape memory properties for potentially more cost-effective active disassembly alternatives. This work was re-addressed by H. Hussein, Dr. Mark Allen and Dr. David Harrison in a paper published in 2009, with results from collaborative work between Dr. Chiodo, Motorola, Nokia, Sony, Gaiker, Indumetal, IKP and others, but has so far produced only pre-competitive results. Since 1996, the field has gained popularity in industry, which has led to more extensive research. Dr. Nick Jones has conducted work on ELVs, among a variety of other novel approaches to AD, using electrically triggered SMA mechanisms. Dr. Jones and Dr. Chiodo have recently developed a NiTi SMA releasing mechanism for LCD panels. These mechanisms allow clean, non-destructive dismantling of macro-assemblies of desktop and laptop displays.
It consists of an automated, electrically triggered fine wire that lies dormant until triggered at EoL. Dr. Jones has developed a group of applications for the ELV market. These include SMA devices for airbags, SMP devices for glass removal and a novel velcro releasing mechanism. Dr. Neubert explored the field of active disassembly further by looking at other trigger methods to initiate disassembly. His conceptual ideas, such as using the volume increase of freezing water to disconnect certain parts of a product or using soluble fasteners, are described in his dissertation published in 2000. Barbara Willems elaborated on this research by focusing on the "pressure cells" described by Neubert. She developed a mathematical model to determine the optimal shape and dimensions of a pressure-activated fastener. Implemented in a product, these snap-fit-like fasteners enable dismantling through variations in ambient pressure. Since pressure variations are very unlikely to occur during the normal lifetime of an electrical product, this trigger mechanism offers a more secure way of disassembly compared to temperature-based triggering. A world survey of smart materials used in active disassembly was conducted in 2012 by Dr. Chiodo and Dr. Jones and published as award-winning research in the journal Assembly Automation in 2013. This is currently noted in the 'Active Disassembly Blog'. Dr. Chiodo's work continues to investigate AD employing materials 'made smart'. Some applications include interstitial layers, modular mechanisms, disassembly functions and other DfX eco-design strategies. Some of this work is described in the circular economy context; see the Circular Economy article and the original posting on the Ellen MacArthur Foundation website. In Japan, the U.S.A. and the EU, various university research departments have investigated various strands of the technology. While there are not yet any mass-produced, industrially implemented applications of the technology, work continues to this end. Re-manufacturing research with AD Dr. Ijomah has been investigating the application of AD technology to the re-manufacturing of electronic products. To date, the work has been conducted with Dr. Chiodo, with several papers on the topic published in various journals. Advantages of AD Most consumer products consist of a large number of parts and a wide range of materials. Disassembly at the end of a product's useful life is an inevitably complex and time-consuming operation if all component parts are to be separated effectively for subsequent re-use or recycling. AD techniques permit the automation or semi-automation of this process and thus make it more viable. The incorporation of AD, and companies taking responsibility for the end-of-life recycling of their products, will have long-term cost implications for the consumer. Hurdles of AD There are currently significant obstacles preventing this technology from succeeding in the mass market, including cost, re-training, fin-cap/law-cap, arbitrage and legislative practice. The use of smart materials A wide range of methods is being developed for use in AD. These methods generally require the use of smart materials which respond to a stimulus in order to change shape or size and thus facilitate the release of parts. The materials involved include shape memory polymers (SMP) and shape memory alloys (SMA).
These materials offer significant shape changes at a range of transition temperatures, which are achieved by methods involving infrared, microwave, supercooling, chemicals and direct heat. The range of "trigger temperatures" for various smart materials means that it is possible to place the products in a heated environment where the outer elements become detached and then move on to a higher temperature zone where internal parts and sub-assemblies are dismantled. More recently, other materials employed for AD by Dr. Chiodo have been investigated further, building on his initial work started in 1996. The repertoire of 'smart materials' and other approaches continues to expand. Examples of AD fittings Screws, rivets, ribbons, bars and clips, specially designed to facilitate AD, can be manufactured from smart materials such as SMAs and SMPs. These will trigger at a pre-determined temperature, depending on the specific application. Notes and references External links (Active Disassembly) Smart materials Design for X
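The staged heating described above (outer fasteners releasing in a lower-temperature zone, internal sub-assemblies in a hotter one) can be illustrated with a small planning sketch. This is a conceptual example only: the fastener names, trigger temperatures and zone temperatures below are hypothetical and are not taken from the article.

```python
# Conceptual sketch (not from the article): grouping hypothetical AD fasteners
# into sequential disassembly stages by trigger temperature, mirroring the
# staged heating described above (outer parts release first, inner parts later).

def plan_stages(fasteners, zone_temperatures_c):
    """Return, for each heated zone, the fasteners expected to release there.

    fasteners: list of (name, trigger_temperature_c) tuples
    zone_temperatures_c: ascending list of zone temperatures in degrees C
    """
    ordered = sorted(fasteners, key=lambda f: f[1])
    released = set()
    stages = []
    for zone in zone_temperatures_c:
        stage = [name for name, t in ordered
                 if t <= zone and name not in released]
        released.update(stage)
        stages.append((zone, stage))
    return stages

# Hypothetical example: SMP clips on the casing trigger at a lower temperature
# than SMA screws holding internal sub-assemblies.
fasteners = [("SMP casing clip", 75), ("SMP ribbon", 80), ("SMA screw", 95)]
for zone, parts in plan_stages(fasteners, [80, 100]):
    print(f"{zone} degC zone releases: {parts}")
```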
Active disassembly
[ "Materials_science", "Engineering" ]
1,589
[ "Smart materials", "Design", "Materials science", "Design for X" ]
4,405,547
https://en.wikipedia.org/wiki/Darzens%20reaction
The Darzens reaction (also known as the Darzens condensation or glycidic ester condensation) is the chemical reaction of a ketone or aldehyde with an α-haloester in the presence of a base to form an α,β-epoxy ester, also called a "glycidic ester". This reaction was discovered by the organic chemist Auguste Georges Darzens in 1904. Reaction mechanism The reaction process begins with deprotonation at the halogenated position. Because of the ester substituents, this carbanion is a resonance-stabilized enolate. This nucleophile next attacks the carbonyl reagent, forming a carbon–carbon bond. These two steps are similar to a base-catalyzed aldol reaction. The oxygen anion in this aldol-like product then SN2 attacks on the formerly-nucleophilic halide-bearing position, displacing the halide to form an epoxide. This reaction sequence is thus a condensation reaction since there is a net loss of HCl when the two reactant molecules join. If the starting halide is an α-halo amide, the product is an α,β-epoxy amide. If an α-halo ketone is used, the product is an α,β-epoxy ketone. Any sufficiently strong base can be used for the initial deprotonation. However, if the starting material is an ester, the alkoxide corresponding to the ester side-chain is commonly chosen in order to prevent complications due to potential acyl exchange side reactions. Stereochemistry Depending on the specific structures involved, the epoxide may exist in cis and trans forms. A specific reaction may give only cis, only trans, or a mixture of the two. The specific stereochemical outcome of the reaction is affected by several aspects of the intermediate steps in the sequence. The initial stereochemistry of the reaction sequence is established in the step where the carbanion attacks the carbonyl. Two sp3 (tetrahedral) carbons are created at this stage, which allows two different diastereomeric possibilities of the halohydrin intermediate. The most likely result is due to chemical kinetics: whichever product is easier and faster to form will be the major product of this reaction. The subsequent SN2 reaction step proceeds with stereochemical inversion, so the cis or trans form of the epoxide is controlled by the kinetics of an intermediate step. Alternately, the halohydrin can epimerize due to the basic nature of the reaction conditions prior to the SN2 reaction. In this case, the initially formed diastereomer can convert to a different one. This is an equilibrium process, so the cis or trans form of the epoxide is controlled by chemical thermodynamics—the product resulting from the more stable diastereomer, regardless of which one was the kinetic result. Alternative reactions Glycidic esters can also be obtained via nucleophilic epoxidation of an α,β-unsaturated ester, but that approach requires synthesis of the alkene substrate first whereas the Darzens condensation allows formation of the carbon–carbon connectivity and epoxide ring in a single reaction. Subsequent reactions The product of the Darzens reaction can be reacted further to form various types of compounds. Hydrolysis of the ester can lead to decarboxylation, which triggers a rearrangement of the epoxide into a carbonyl (4). Alternately, other epoxide rearrangements can be induced to form other structures. See also Johnson–Corey–Chaykovsky reaction Reformatskii reaction References Addition reactions Carbon-carbon bond forming reactions Epoxidation reactions Epoxides Name reactions
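A schematic overall equation helps summarise the condensation described above. The equation below is a sketch using generic R groups, an ethyl ester and a chloride leaving group for illustration; the base (sodium ethoxide here) accounts for the HCl lost in the net reaction.

```latex
% Schematic overall Darzens condensation (generic substituents; the oxygen in
% the product bridges the two newly joined carbons to form the epoxide ring):
\[
R^{1}R^{2}C{=}O \;+\; Cl\,CH(R^{3})CO_{2}Et
\;\xrightarrow{\;\text{NaOEt}\;}\;
\underbrace{R^{1}R^{2}C{-}C(R^{3})CO_{2}Et}_{\text{glycidic ester (epoxide across the new C--C bond)}}
\;+\; NaCl \;+\; EtOH
\]
```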
Darzens reaction
[ "Chemistry" ]
798
[ "Name reactions", "Carbon-carbon bond forming reactions", "Ring forming reactions", "Organic reactions" ]
4,406,265
https://en.wikipedia.org/wiki/Mercury%28II%29%20nitrate
Mercury(II) nitrate is an inorganic compound with the chemical formula Hg(NO3)2. It is the mercury(II) salt of nitric acid (HNO3). It contains mercury(II) cations and nitrate anions, and water of crystallization in the case of a hydrous salt. Mercury(II) nitrate forms hydrates. Anhydrous and hydrous salts are colorless or white soluble crystalline solids that are occasionally used as reagents. Mercury(II) nitrate is made by treating mercury with hot concentrated nitric acid. Neither the anhydrous salt nor the monohydrate has had its structure confirmed by X-ray crystallography. The anhydrous material is more widely used. Uses Mercury(II) nitrate is used as an oxidizing agent in organic synthesis, as a nitration agent, as an analytical reagent in laboratories, in the manufacture of felt, and in the manufacture of mercury fulminate. An alternative qualitative Zeisel test can be done with the use of mercury(II) nitrate instead of silver nitrate, leading to the formation of scarlet red mercury(II) iodide. Health information Mercury compounds are highly toxic. The use of this compound by hatters and the subsequent mercury poisoning of said hatters is a common theory of where the phrase "mad as a hatter" came from. See also Mercury The Hatter Mercury poisoning Gilding References External links ATSDR - Toxic Substances Portal - Mercury (11/14/2013) ATSDR - Public Health Statement: Mercury (11/14/2013) ATSDR - ALERT! Patterns of Metallic Mercury Exposure, 6/26/97 (link not traceable 11/14/2013) ATSDR - Medical Management Guidelines for Mercury (11/14/2013) ATSDR - Toxicological Profile: Mercury (11/14/2013) Safety data (MSDS) (link not traceable 11/14/2013) Mercuric Nitrate (ICSC) Mercury Mercury Information Packages How to Make Good Mercury Electrical Connections, Popular Science monthly, February 1919, Unnumbered page, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PT14 Mercury(II) compounds Nitrates Oxidizing agents
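The preparation from mercury and hot concentrated nitric acid mentioned above can be written as a balanced equation. The stoichiometry below assumes nitrogen dioxide as the reduction product, which is typical for hot, concentrated acid; with more dilute acid the gas is largely NO, so this is an idealised sketch rather than the only possible equation.

```latex
\[
\mathrm{Hg} \;+\; 4\,\mathrm{HNO_{3}\,(hot,\ conc.)} \;\longrightarrow\;
\mathrm{Hg(NO_{3})_{2}} \;+\; 2\,\mathrm{NO_{2}\uparrow} \;+\; 2\,\mathrm{H_{2}O}
\]
```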
Mercury(II) nitrate
[ "Chemistry" ]
470
[ "Nitrates", "Redox", "Oxidizing agents", "Salts" ]
4,408,311
https://en.wikipedia.org/wiki/ECTFE
ECTFE (ethylene-chlorotrifluoroethylene) is an alternating copolymer of ethylene and chlorotrifluoroethylene. It is a semi-crystalline fluoropolymer (a partly fluorinated polymer), with chemical corrosion resistance properties. Physical and chemical properties ECTFE (ethylene chlorotrifluoroethylene) is a polymer known for its chemical resistance, making it suitable for various industrial applications. It is resistant to acids at high concentrations/temperatures, caustic media, oxidizing agents, and many solvents, similar to PTFE (polytetrafluoroethylene). One of the key properties of ECTFE is its permeation resistance: permeation of large molecules through the polymer is generally slow and not significant in practical applications. Small molecules, however, may permeate through the polymer matrix. In lining or coating applications using ECTFE, the permeability of certain small molecules determines the lifetime of the anti-corrosion protection. Small molecules such as H2O, O2, Cl2, H2S, HCl, HF, HBr, N2, H2, and CH3OH are relatively mobile in the polymer matrix and lead to measurable effects. This permeation resistance is particularly critical in lining and coating applications, where the material is used to protect underlying layers, such as fiber-reinforced plastic (FRP) or steel, from corrosive substances. The polymer's resistance to permeation is attributed to the presence of chlorine atoms in the polymer chain, which occupy free volume and restrict the movement of small molecules through the material. ECTFE finds applications in various industries due to its favorable chemical resistance properties, providing durable and reliable protection in harsh environments. ECTFE has a continuous usage temperature range between –76°C and +150°C (–105°F to +300°F). It has strong impact resistance and a Young's modulus of around 1700 MPa, allowing for self-standing items and pressure piping systems. The polymer maintains high impact strength in cryogenic applications. In terms of fire resistance, ECTFE shows a limiting oxygen index of 52%. This value places it between the fully fluorinated polymers PTFE, PFA, and FEP, with a limiting oxygen index of 95%, and other partially fluorinated polymers like PVDF, with a limiting oxygen index of 44%, or ETFE, with a limiting oxygen index of 30%. ECTFE acts as an electrical insulator, with high resistivity, a low dielectric constant and a low dissipation factor, allowing its use for wire and cable primary and secondary jacketing. ECTFE has good ultraviolet (UV) resistance, in particular against UV-A and UV-B. Films made of the polymer can be transparent. Applications ECTFE is applied in several ways: by electrostatic powder coating on metal surfaces; by rotolining on metal surfaces (rotolining grade Halar 6012F); by sheet lining on metal surfaces or on FRP (glass fiber, carbon fibers, etc.); by extrusion or injection molding of self-standing items, in particular pressure pipes; by rotomolding of self-standing items like tanks or other shapes (rotomolding grade); and as a protective film using an adequate adhesive. ECTFE powder is most commonly used in electrostatic powder coating. Such coatings have a typical thickness of 0.8 mm but can be applied up to 2 mm with a special grade for high build-up. Extrusion of ECTFE fabric-backed sheets and subsequent fabrication into vessels, pipes or valves is done in the chemical industry. Thick sheets are compression molded and can be manufactured up to 50 mm in thickness.
They are used in the semiconductor industry for wet benches or machining other parts. The most common application of ECTFE is for corrosion protection, for which it is used in industries including: Bleaching towers in pulp and paper Sulfuric acid production and storage Flue gas treatment in particular in the SNOX and WSA processes Electrolysis collectors or drying towers in the chlorine industry Transport vessels for hazardous goods, in particular class 8 trucks Halogen-related industry (bromine, chlorine, fluorine) Acid handling (sulfuric acid, nitric acid, phosphoric acid, hydrogen halides, hydrogen sulfide, etc.) Mining applications, in particular high pressure heap leaching ECTFE has been widely used in the semiconductor industry for wet tool and tubing systems for lithographic chemicals. It is also used in the pharmaceutical industry. ECTFE is used for primary and secondary jacketing in specialty cables like data cables or self-regulating heating cables, applications where good fire resistance and electrical properties are key properties. It is also used for braiding in that field. ECTFE in the form of a monofilament fiber is used in flue gas treatment and in certain chemical processes. Unlike PTFE, ECTFE can be crimped, which allows its production in the form of nonwoven fibers with high surface area and porosity. Even though such material has low chemical reactivity, ECTFE in general has somewhat lower chemical resistance compared to PTFE. ECTFE is used for manufacturing gaskets to store liquid oxygen and other propellants for aerospace applications. See also BS 4994 ECTFE as a thermoplastic lining for dual laminate chemical process plant equipment RTP-1 ECTFE as a thermoplastic lining for dual laminate ASME stamped vessels References Plastics Fluoropolymers Copolymers Thermoplastics
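The lining and coating discussion above notes that small-molecule permeability determines the lifetime of the anti-corrosion protection. A minimal steady-state Fick's-law sketch shows how such an estimate is typically framed; the permeability coefficient, pressure difference and liner thickness below are illustrative placeholders, not measured ECTFE data.

```python
# Minimal sketch: steady-state permeation flux through a polymer liner using
# Fick's first law, J = P * (p_outside - p_inside) / thickness.
# The permeability coefficient and geometry below are illustrative placeholders,
# not measured ECTFE values.

def permeation_flux(permeability, pressure_diff, thickness):
    """Steady-state flux per unit area; units follow the inputs, e.g.
    cm^3(STP)/(cm^2*s) if permeability is in cm^3(STP)*cm/(cm^2*s*Pa)."""
    return permeability * pressure_diff / thickness

P = 1.0e-13   # assumed permeability coefficient, cm^3(STP)*cm/(cm^2*s*Pa)
dp = 1.0e5    # partial-pressure difference across the liner, Pa (about 1 bar)
d = 0.1       # liner thickness, cm (1.0 mm)

flux = permeation_flux(P, dp, d)          # cm^3(STP) per cm^2 per second
per_year = flux * 3600 * 24 * 365         # cm^3(STP) per cm^2 per year
print(f"Estimated permeant flux: {per_year:.3g} cm^3(STP) per cm^2 per year")
```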
ECTFE
[ "Physics" ]
1,185
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
4,408,405
https://en.wikipedia.org/wiki/FKM
FKM is a family of fluorocarbon-based fluoroelastomer materials defined by ASTM International standard D1418, and ISO standard 1629. It is commonly called fluorine rubber or fluoro-rubber. FKM is an abbreviation of Fluorine Kautschuk Material. All FKMs contain vinylidene fluoride as the common monomer, to which different other monomers are added for specific types and functionalities, fitting the desired application. Originally developed by DuPont (under the brand name Viton, now owned by Chemours), FKMs are today also produced by many other companies, including: Daikin (Dai-El), 3M (Dyneon), Solvay S.A. (Tecnoflon), HaloPolymer (Elaftor), Gujarat Fluorochemicals (Fluonox), and several Chinese manufacturers. Fluoroelastomers are more expensive than neoprene or nitrile rubber elastomers. They provide additional heat and chemical resistance. FKMs can be divided into different classes on the basis of either their chemical composition, their fluorine content, or their cross-linking mechanism. Types On the basis of their chemical composition FKMs can be divided into the following types: Type-1 FKMs are composed of vinylidene fluoride (VDF) and hexafluoropropylene (HFP). Copolymers are the standard type of FKMs showing a good overall performance. Their fluorine content is approximately 66 weight percent. Type-2 FKMs are composed of VDF, HFP, and tetrafluoroethylene (TFE). Terpolymers have a higher fluorine content compared to copolymers (typically between 68 and 69 weight percent fluorine), which results in better chemical and heat resistance. Compression set and low temperature flexibility may be affected negatively. Type-3 FKMs are composed of VDF, TFE, and perfluoromethylvinylether (PMVE). The addition of PMVE provides better low temperature flexibility compared to copolymers and terpolymers. Typically, the fluorine content of type-3 FKMs ranges from 62 to 68 weight percent. Type-4 FKMs are composed of propylene, TFE, and VDF. While base resistance is increased in type-4 FKMs, their swelling properties, especially in hydrocarbons, are worsened. Typically, they have a fluorine content of about 67 weight percent. Type-5 FKMs are composed of VDF, HFP, TFE, PMVE, and ethylene. Known for base resistance and high-temperature resistance to hydrogen sulfide. Cross-linking mechanisms There are three established cross-linking mechanisms used in the curing process of FKMs. Diamine cross-linking using a blocked diamine. In the presence of basic (alkaline) media, VDF is vulnerable to dehydrofluorination, which enables the addition of the diamine to the polymer chain. Typically, magnesium oxide is used to neutralize the resulting hydrofluoric acid and rearrange into magnesium fluoride and water. Although rarely used today, diamine curing provides superior rubber-to-metal bonding properties as compared with other cross-linking mechanisms. The diamine's capability to be hydrated makes the diamine cross-link vulnerable in aqueous media. Ionic cross-linking (dihydroxy cross-linking) was the next step in curing FKMs. This is today the most common cross-linking chemistry used for FKMs. It provides superior heat resistance, improved hydrolytic stability and better compression set than diamine curing. In contrast to diamine curing, the ionic mechanism is not an addition mechanism but an aromatic nucleophilic substitution. Dihydroxy aromatic compounds are used as the cross-linking agent, and quaternary phosphonium salts are typically used to accelerate the curing process. 
Peroxide cross-linking was originally developed for type 3 FKMs containing PMVE as diamine and bisphenolic cross-linking systems can lead to cleavage in a polymer backbone chain containing PMVE. While diamine and bisphenolic cross-linking are ionic reactions, peroxide cross-linking is a free-radical mechanism. Though peroxide cross-links are not as thermally stable as bisphenolic cross-links, they normally are the system of choice in aqueous media and nonaqueous electrolyte media. Properties Fluoroelastomers provide excellent high temperature (up to 500°F or 260°C) and aggressive fluids resistance when compared with other elastomers, while combining the most effective stability to many sorts of chemicals and fluids such as oil, diesel, ethanol mix or body fluid. The performance of fluoroelastomers in aggressive chemicals depends on the nature of the base polymer and the compounding ingredients used for molding the final products (e.g. o-rings). Some formulations are generally compatible with hydrocarbons, but incompatible with ketones such as acetone and methyl ethyl ketone, ester solvents such as ethyl acetate, amines, and organic acids such as acetic acid. They can be easily distinguished from many other elastomers because of their high density of over 1800 kg/m3, significantly higher than most types of rubber. Applications Because of their outstanding performance they find use in a number of sectors, including the following: Chemical process and petroleum refining, where they are used for seals, pumps, gaskets and so on, due to their resistance to chemicals; Analysis and process instruments: separators, diaphragms, cylindrical fittings, hoops, gaskets, etc. Semiconductor manufacturing; Food and pharmaceutical, because of their low degradation, also in contact with fluids; Aviation and aerospace: high operating temperatures and high altitudes require superior heat and low-temperature resistance. They are suitable for the production of wearables, due to low wear and discoloration even during prolonged lifetimes in contact with skin oils and frequent exposure to light, while guaranteeing high comfort and stain resistance; The automotive industry represents their main application sector, where constant reach for higher efficiencies push manufacturers towards high-performing materials. An example are FKM o-rings used as an upgrade to the original neoprene seals on Corvair pushrod tubes that deteriorated under the high heat produced by the engine, allowing oil leakage. FKM tubing or lined hoses are commonly recommended in automotive and other transportation fuel applications when high concentrations of biodiesel are required. Studies indicate that types B and F (FKM- GBL-S and FKM-GF-S) are more resistant to acidic biodiesel. (This is because biodiesel fuel is unstable and oxidizing.) FKM O-rings have been used safely for some time in scuba diving by divers using gas blends referred to as nitrox. FKMs are used because they have a lower probability of catching fire, even with the increased percentages of oxygen found in nitrox. They are also less susceptible to decay under increased oxygen conditions. While these materials have a wide range of applications, their cost is prohibitive when compared to other types of elastomers, meaning that their adoption must be justified by the need for outstanding performance (as in the aerospace sector) and is inadvisable for low-cost products. 
FKM/butyl gloves are highly impermeable to many strong organic solvents that would destroy or permeate commonly used gloves (such as those made with nitrile rubber). Precautions At high temperatures or in a fire, fluoroelastomers decompose and may release hydrogen fluoride. Any residue must be handled using protective equipment. See also Magnesium/Teflon/Viton FFKM, perfluoro-elastomers FEPM, tetrafluoroethylene/propylene elastomers PVDF, polyvinylidene fluoride References Overview of different types and their applications. External links Properties of Elastomers - Chemical Resistance List (PDF; 0.6 MB) Designing with Fluoroelastomers (PDF; 0.8 MB) Organofluorides Elastomers Materials science Fluoropolymers
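The type descriptions above quote approximate fluorine weight percents for the different FKM families. A short sketch shows how such a figure follows from an assumed monomer ratio; the 78:22 VDF:HFP molar ratio used here is an illustrative assumption, not a published recipe.

```python
# Sketch: fluorine weight percent of a VDF/HFP copolymer from monomer mole fractions.
# Monomer repeat units: VDF = CH2=CF2 (2 F per unit), HFP = CF3-CF=CF2 (6 F per unit).
# The 78:22 molar ratio below is an illustrative assumption, not a published recipe.

F = 18.998  # atomic masses, g/mol
C = 12.011
H = 1.008

monomers = {
    # name: (molar mass of repeat unit, fluorine atoms per unit)
    "VDF": (2 * C + 2 * H + 2 * F, 2),
    "HFP": (3 * C + 6 * F, 6),
}

def fluorine_wt_percent(mole_fractions):
    mass_f = sum(x * monomers[m][1] * F for m, x in mole_fractions.items())
    mass_total = sum(x * monomers[m][0] for m, x in mole_fractions.items())
    return 100 * mass_f / mass_total

print(f"{fluorine_wt_percent({'VDF': 0.78, 'HFP': 0.22}):.1f} wt% F")
# Roughly 66 wt% F, consistent with the figure quoted for type-1 copolymers above.
```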
FKM
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,751
[ "Applied and interdisciplinary physics", "Synthetic materials", "Materials science", "Elastomers", "nan" ]
4,410,019
https://en.wikipedia.org/wiki/In-mould%20decoration
In-mould decoration, a special type of plastic molding, is used for decorating plastic surfaces with color and/or with an abrasion resistant coat. Principle A carrier film is placed inside the opened mould. It carries the dried paint layers which are to be transferred to the plastic part, with the paint facing the gate. After filling with plastic the paint adheres to the plastic, and is removed from the carrier when opening the mould. For the next cycle the carrier film is advanced, positioning the next area to be transferred. Mold construction The mould must be constructed so that the back side of the carrier film rests against a flat wall. The plastic film can be bent slightly, but the more it is bent, the greater the risk of wrinkles. The filling only takes place on the other side, the side of the carrier with the material to be transferred. The part has to stay on the side of the gate. The tips of the ejectors are usually bent slightly to ensure the parts stick to them. Carrier film feeder To place the carrier quickly in the mold, the carrier film is wound on a coil. The full supply roll is above the mold, and the take-up roll beneath. Usually the film feeder is attached to the moving side of the mold, to enable de-molding when opening the mold. Cleaning of the parts Remnants from the paint ("flakes") must be removed from the parts. This is usually done by rotating brushes. See also In-mould labelling References Ornaments Plastics Plastics industry
In-mould decoration
[ "Physics" ]
317
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
4,410,085
https://en.wikipedia.org/wiki/Intermediate%20eXperimental%20Vehicle
The Intermediate eXperimental Vehicle (IXV) is a European Space Agency (ESA) experimental suborbital re-entry vehicle. It was developed to serve as a prototype lifting body orbital return vehicle to validate the ESA's work in the field of reusable orbital return vehicles. The European Space Agency has a programme called the Future Launchers Preparatory Programme (FLPP), which made a call for submissions for a reusable spaceplane. One of the submissions came from the Italian Space Agency, which presented its own Programme for Reusable In-orbit Demonstrator in Europe (PRIDE); this programme went on to develop an initial test vehicle, Pre-X, followed by the prototype named Intermediate eXperimental Vehicle (IXV) and subsequently Space Rider, which inherits technology from its IXV prototype. On 11 February 2015, the IXV conducted its first 100-minute suborbital space flight, successfully completing its mission upon landing intact on the surface of the Pacific Ocean. The vehicle is the first ever lifting body to perform full atmospheric reentry from orbital speed. Past missions have flight tested either winged bodies, which are highly controllable but also very complex and costly, or capsules, which are difficult to control but offer less complexity and lower cost. Development Background During the 1980s and 1990s, there was significant international interest in the development of reusable launch platforms and reusable spacecraft, particularly in respect to spaceplanes, perhaps the most high-profile examples of these being the American Space Shuttle and Soviet Buran programmes. The national space agencies of European nations, such as France's Centre National d'Études Spatiales (CNES) and Germany's German Aerospace Center (DLR), worked on their own designs during this era, the most prominent of these to emerge being the Hermes spaceplane. Development of the Hermes programme, which was backed by the European Space Agency (ESA) for several years, was ultimately terminated in 1992, prior to any flights being performed, in favour of a partnership arrangement with the Russian Aviation and Space Agency (RKA) to use the existing Soyuz spacecraft instead. While work on the development of the Hermes vehicle was cancelled during the early 1990s, the ESA maintained its strategic long-term objective to indigenously develop and eventually deploy similar reusable space vehicles. Accordingly, in support of this goal, the ESA embarked upon a series of design studies on different experimental vehicle concepts, as well as efforts to refine and improve technologies deemed critical to future reentry vehicles. In order to test and further develop the technologies and concepts produced by these studies, there were clear needs to accumulate practical flight experience with reentry systems, as well as to maintain and expand upon international cooperation in the fields of space transportation, exploration, and science. Out of these desires emerged the Future Launchers Preparatory Programme (FLPP), an ESA-headed initiative conceived and championed by a number of its member states, which provided a framework for addressing the challenges and development of the technology associated with reentry vehicles. It was recognised that, in order for significant progress to be made, FLPP would require the production and testing of a prototype reentry vehicle that drew on existing research, technologies, and designs.
A step-by-step approach, using a series of test vehicles prior to the development of a wider series of production vehicles, was seen to reduce the risk and to allow for the integration of progressively more sophisticated developments from the early, relatively low-cost missions. In line with this determination, during early 2005, the Intermediate eXperimental Vehicle (IXV) project was formally initiated by the Italian Space Agency and the Italian Aerospace Research Centre under an Italian programme named PRIDE (Programme for Reusable In-orbit Demonstrator in Europe). Their main industrial contractor was Next Generation Launcher Prime SpA (NGLP) in Italy. The latter organisation is a joint venture entity comprising two major European aerospace companies, Astrium and Finmeccanica. The PRIDE programme had the support of various national space agencies, including the European Space Research and Technology Centre, Italian Space Agency (ASI), French space agency CNES, and Germany's DLR; by November 2006, the IXV was supported by 11 Member States: Austria, Belgium, France, Germany, Ireland, Italy, Portugal, Spain, Sweden, Switzerland, and the Netherlands. Of these, Italy emerged as the principal financial backer of the IXV programme. Selection and pre-launch testing The IXV project benefitted from and harnessed much of the research data and operational principles from many of the previously conducted studies, especially from the successful Atmospheric Reentry Demonstrator (ARD), which was test-flown during 1998. Early on, during the mission definition and design maturity stages of the project, thorough comparisons of existing ESA and national concepts were again conducted against shared criteria, aimed at evaluating the experiment requirements (technology and systems), programme requirements (technology readiness, development schedule and cost) and risk mitigation (feasibility, maturity, robustness, and growth potential). The selected baseline design, a slender lifting body configuration, drew primarily upon the CNES-led Pre-X and the ESA's ARD vehicles. Development work quickly proceeded through the preliminary design definition phase, reaching a system requirements review by mid-2007. On 18 December 2009, the ESA announced the signing of a contract with Thales Alenia Space, valued at , to cover 18 months of preliminary IXV work. In 2011, the total estimated cost for the IXV project was reportedly . During late 2012, the IXV's subsonic parachute system was tested at the Yuma Proving Ground in Arizona, United States. Shortly thereafter, a series of water impact tests were conducted at Consiglio Nazionale delle Ricerche's INSEAN research tank near Rome, Italy. On 21 June 2013, an IXV test vehicle was dropped from an altitude of in the Salto di Quirra range off Sardinia, Italy. The purpose of this test-drop was to validate the vehicle's water-landing system, including the subsonic parachute, flotation balloons, and beacon deployment. A small anomaly was encountered during the inflation of the balloons; however, all of the other systems performed as expected. Following the drop-test, the vehicle was retrieved for further analysis. On 23 June 2014, the recovery ship Nos Aries conducted a training exercise involving a single IXV test article off the coast of Tuscany.
During June 2014, the IXV test vehicle arrived at the ESTEC Technical Centre in Noordwijk, The Netherlands, to undergo a test campaign to confirm its flight readiness in anticipation of a flight on a Vega rocket, which was by that point scheduled to occur during November of that year. Design The Intermediate eXperimental Vehicle (IXV) is a prototype uncrewed reusable spaceplane —and the precursor of the next model called Space Rider. According to the ESA, the Intermediate part of its name is due to the shape of the vehicle not necessarily being representative of the envisioned follow-on production spacecraft. It possesses a lifting body arrangement which lacks wings of any sort, resulting in a lift to drag ratio (L/D) of 0.7 during the reentry. The size and shape is balanced between the need to maximise internal volume to accommodate experimental payloads while keeping within the mass limits of the Vega launcher and favourable centre of gravity. The vehicle purposefully includes several key technologies of interest to the ESA, including its thermal protection system and the presence of active aerodynamic control surfaces. Control and manoeuvrability of the IXV is provided by a combination of these aerodynamic surfaces (comprising a pair of movable flaps) and thrusters throughout its full flight regime, which includes flying at hypersonic speeds. A key role for the IXV is the gaining of data and experience in aerodynamically controlled reentry, which has been claimed by the ESA to represent significant advances on earlier ballistic and quasi-ballistic techniques previously employed. Throughout each mission, representative reentry performance data is recorded in order to investigate aerothermodynamic phenomena and to validate system design tools and ground verification methods, which in turn supports future design efforts. Reentry is accomplished in a nose-high attitude, similar to the NASA-operated Space Shuttle; during this phase of flight, manoeuvring of the spaceplane is accomplished by rolling out-of-plane and then lifting in that direction, akin to a conventional aircraft. Landing is accomplished by an arrangement of parachutes, which are ejected during the descent through the top of the vehicle; additionally, seconds prior to landing, a series of airbags are inflated to soften the landing. Another key ESA objective for the IXV was the verification of both its structure and its advanced thermal protection measures, specifically their performance during the challenging conditions present during reentry. The underside is covered by ceramic thermal protection panels composed of a blend of carbon fiber and silicon carbide directly fixed to the spaceplane's structure, while ablative materials comprising a cork and silicone-based composite material coat the vehicle's upper surfaces. The airframe was based on a traditional hot-structure/cold-structure arrangement, relying upon a combination of advanced ceramic and metallic assemblies, insulating materials, as well as the effective design of assorted attachments, junctions and seals; the role played by advanced navigation and control techniques was also deemed to be of high importance. The IXV is supported on-orbit by a separate manoeuvring and support module, which is largely similar to the Resource Module that had been intended for use by the cancelled Hermes shuttle. The avionics of the IXV are controlled by a LEON2-FT microprocessor and are interconnected by a MIL-STD-1553B serial bus. 
As an experimental vehicle primarily intended to gather data, various assorted sensors and monitoring equipment were present and operational throughout the full length of the flight in order to gather data to support the evaluation effort, including the verification of the vehicle's critical reentry technologies. The recorded data covered various elements of the IXV's flight, including its guidance, navigation, and control systems, such as Vehicle Model Identification (VMI) measurements for post-flight reconstruction of the spacecraft's dynamic behaviour and environment, as well as the mandatory core experiments regarding its reentry technologies. Additionally, the IXV will typically carry complementary passenger experiments which, while not having been directly necessary to its mission success, serve to increase the vehicle's return on investment; according to the ESA, in excess of 50 such proposals had been received from a mixture of European industries, research institutes and universities, many having benefits to future launcher programmes (such as potential additional methods for guidance, navigation, control, structural health monitoring, and thermal protection), space exploration, and scientific value. Throughout each mission, telemetry is broadcast to ground controllers to monitor the vehicle's progress; however, phenomenon such as the build-up of plasma around the spaceplane during its re-entry has been known to block radio signals. The IXV is the precursor of the next model named Space Rider, also developed under the Italian PRIDE programme for ESA. Flight Test During 2011, it was reported that the IXV was planned to conduct its maiden flight as early as 2013; however, the vehicle was later rescheduled to perform its first launch using the newly developed Vega launcher during late 2014. This initial launch window was ultimately missed due to unresolved range safety concerns. Following some delays, on 11 February 2015 the IXV was successfully launched into a suborbital trajectory by a Vega rocket on the VV04 mission. Having launched at 08:40am local time, the vehicle separated from the Vega launch vehicle at 333 km altitude and ascended to 412 km, after which it commenced a controlled descent towards beginning its reentry at 120 km altitude, travelling at a recorded speed of 7.5 km/s, identical to a typical re-entry path to be flown by low Earth orbit (LEO) spacecraft. Following re-entry, the IXV glided over the Pacific Ocean prior to the opening of its landing parachutes, which were deployed in order to slow down the craft's descent, having flown over 7300 km from the beginning of its reentry. The vehicle descended to the surface of the Pacific Ocean, where it was subsequently recovered by the Nos Aries ship; analysis of both the spacecraft itself and recorded mission data took place. Jean-Jacques Dordain, then-director general of the ESA, stated of the mission: "It couldn't have been better, but the mission itself is not yet over... it will move the frontiers of knowledge further back concerning aerodynamics, thermal issues, and guidance and navigation of such a vehicle – this lifting body". Future Plans Following on from the completion of the reportedly 'flawless' test flight, ESA officials decided that an additional test flight should be performed during the 2019-2020 timeframe. 
During this mission, the IXV had been envisioned to land in a different manner, descending directly onto a runway instead of performing a splashdown landing as before; this approach is to be achieved either via the installation of a parafoil, or by the adoption of landing gear. The planning for the second spaceflight was originally to begin during March 2015, while design work on the modified vehicle was to commence during mid 2015. Transition to Space Rider In the ESA December 2016 Science Budget funding was approved by the Ministerial Council for the next IXV flight in the form of the commercialised Space Rider orbital vehicle. Following design reviews in 2018 and 2019, a full size mockup was to be dropped from a balloon in 2019 and will have a first flight atop a Vega-C in 2020/2021. It will then conduct approximately 5 science flights at 6 to 12-month intervals before becoming commercially available from 2025 at a cost of $40,000 per kg of payload for launch, operation, and return to Earth. The Space Rider mini shuttle will have a length of between 4 and 5 meters, a payload capacity of 800 kg, a total mass of 2,400 kg, and endurance of 2 to 6-month missions at a 400 km orbit before returning to Earth and being reflown within 4 months. The Vega-C rocket's 4th stage payload dispenser AVUM acts as the service module for the shuttle, providing orbital manoeuvring and braking, power, and communications before being jettisoned for re-entry. The AVUM service module replaces the integrated IXV Propulsion Module and frees 0.8 m3 of internal space in the vehicle for a payload bay. The Space Rider is similar in operation to the US X-37B but half the X37's length and a fifth the X37's mass and payload capacity, which will make it the smallest and lightest spaceplane to ever fly. Payload doors will be opened on achieving orbit exposing instruments and experiments to space before being closed for landing. In December 2020, ESA signed contracts with co-prime contractors Thales Alenia Space and Avio for delivery of the Space Rider flight model. The first flight is now scheduled in late 2025. Specifications See also 2015 in spaceflight Atmospheric Reentry Demonstrator (ARD) - ESA reentry testbed flown in 1998 European eXPErimental Re-entry Testbed (EXPERT) - research programme developing materials used in IXV, never flown Future Launchers Preparatory Programme - parent programme for IXV Hopper - an earlier ESA project for a crewed spaceplane, cancelled HYFLEX (Hypersonic Flight Experiment) - equivalent Japanese spaceplane demonstrator for HOPE-X developed and flown by NASDA in 1996 RLV-TD - Indian reusable technology validation test bed, in development by ISRO Space Rider - orbital spaceplane developed from IXV technologies Aurora programme References Further reading External links Official IXV website IXV Twitter profile Full replay from liftoff to splashdown for IXV reentry mission, ESA Multimedia Gallery (11 February 2015) IXV first results press conference, ESA Space in Videos (16 June 2015) ESA's IXV reentry vehicle mission, ESA Multimedia Gallery (2012 animation) IXV: learning to come back from Space, IXV Video News Release VNR ESA's Intermediate eXperimental Vehicle, ESA Multimedia Gallery (2008 animation) ESA Euronews: "Splashdown – the re-entry test" (2013-08-22). 
CNES reusable atmospheric re-entry vehicle: PRE-X Atmospheric entry CNES European Space Agency satellites Hypersonic aircraft 2010s international experimental aircraft Spacecraft launched in 2015 Spaceplanes Suborbital spaceflight Spacecraft launched by Vega rockets Technology demonstrations
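The flight figures reported above (reentry begun at 120 km at about 7.5 km/s, a lift-to-drag ratio of roughly 0.7, and over 7300 km flown from the start of reentry) can be loosely sanity-checked with the classical equilibrium-glide range approximation. This is a back-of-envelope sketch that ignores the initial non-equilibrium phase and any bank manoeuvres, so it only shows that the reported distance is of the expected order of magnitude.

```python
import math

# Back-of-envelope check of the reported IXV glide distance using the classical
# equilibrium-glide range approximation:
#   R ~ (R_E / 2) * (L/D) * ln(1 / (1 - (V / V_c)^2)),  with V_c = sqrt(g * R_E)
# This neglects the early non-equilibrium phase and bank manoeuvres, so it is
# only an order-of-magnitude comparison with the ~7300 km quoted above.

g = 9.81          # m/s^2
R_E = 6.371e6     # Earth radius, m
L_over_D = 0.7    # lift-to-drag ratio quoted for the IXV
V = 7.5e3         # entry speed quoted for the IXV, m/s

V_c = math.sqrt(g * R_E)   # circular orbital speed, roughly 7.9 km/s
R = 0.5 * R_E * L_over_D * math.log(1.0 / (1.0 - (V / V_c) ** 2))
print(f"Equilibrium-glide range estimate: {R / 1e3:.0f} km")  # a few thousand km
```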
Intermediate eXperimental Vehicle
[ "Engineering" ]
3,419
[ "Atmospheric entry", "Aerospace engineering" ]
4,410,993
https://en.wikipedia.org/wiki/Archicad
Archicad is an architectural building information modeling (BIM) computer-aided design (CAD) software for Mac and Windows developed by the Hungarian company Graphisoft. Archicad offers computer aided solutions for common aspects of aesthetics and engineering during the design process of the built environment: buildings, interiors, urban areas, etc. History Development of Archicad began in 1982 for the Apple Lisa, the predecessor of the original Apple Macintosh. Following its launch in 1987, with Graphisoft's "Virtual Building" concept, Archicad became regarded by some as the first implementation of BIM. However, Archicad founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter that Sonata "was more advanced in 1986 than Archicad at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". Archicad has been recognized as the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers and considered "revolutionary" for the ability to store large amounts of information within the 3D model. Product overview Archicad is a complete design suite with 2D and 3D drafting, visualization and other building information modeling functions for architects, designers and planners. A wide range of software applications are integrated in Archicad to cover most of the design needs of an architectural office: 2D modeling CAD software – drawing tools for creating accurate and detailed technical drawings 3D Modeling software – a 3D CAD interface specially developed for architects capable of creating various kind of building forms Architectural rendering and Visualization software – a high performance rendering tool to produce photo-realistic pictures or videos Desktop publishing software – with similar features to mainstream DTP software to compose printed materials using technical drawings pixel-based images and texts Document management tool – a central data storage server with remote access, versioning tool with backup and restore features Building information modeling software – not just a collection of the above-mentioned applications with an integrated user interface but a novel approach to building design called BIM Features Using parametric objects Archicad allows the user to work with data-enhanced parametric objects, often called smart objects by users. This differs from the operational style of other CAD programs created in the 1980s. The product allows the user to create a virtual building with virtual structural elements like walls, slabs, roofs, doors, windows and furniture. A large variety of pre-designed, customizable objects come with the program. Archicad software offers the flexibility to work with a 2D or 3D representation on the screen. Although the program's database stores data in three dimensions, 2D drawings can be exported at any time. Plans, elevations, and sections are generated from the 3D virtual building model and can be updated constantly by rebuilding the view. Detail drawings are based on enlarged portions of the model, with 2D detail added in. Collaboration and remote access Archicad released its first file exchange based Teamwork solution in its version 5.1 in 1997, which allowed more architects to work on the same building model simultaneously. A fully rewritten Teamwork 2.0 solution with a new database approach came out in version 13 in 2009 named Graphisoft BIM Server. 
Since only the changes and differences are sent to the central storage, this system allows remote access to the same project over the Internet, thus allowing worldwide project collaboration and coordination. In 2014, with the introduction of the BIMcloud, better integration is provided with standard IT solutions: browser-based management, LDAP connection, and HTTP/HTTPS based communication. Also, new scalability options are available, by allowing multi-server layouts to be created, with optional caching servers. APIs and scripting Third-party vendors and some manufacturers of architectural products have compiled libraries of architectural components for use in Archicad. The program includes Geometric Description Language (GDL) used to create new components. Also, API (Application Programming Interface) and ODBC database connections are supported for third party Add-On developers. Via direct API links to 4D and 5D software such as Vico Office Suite or Tocoman iLink, the Archicad model can be exported to BIM-based cost estimation and scheduling. Archicad is also directly linked via API to Solibri's Model checking and quality assurance tools. In addition, Graphisoft provides a direct link to Grasshopper 3D enabling a visual programming environment for parametric modelling and design. Data interchange Archicad can import and export DWG, DXF, Industry Foundation Classes (IFC) and BIM Collaboration Format (BCF) files among others. Graphisoft is an active member of BuildingSMART (formerly the International Alliance for Interoperability, IAI), an industry organization that publishes standards for file and data interoperability for built environment software. Graphisoft was one of the founders of the Open BIM concept, which supports 3D BIM data exchange between the different design disciplines on open-source platforms. Archicad can also export the 3D model and its corresponding 2D drawings to BIMx format which can be viewed on several desktop and mobile platforms with native BIMx viewers. License types and localizations License types Commercial, educational and fully functional 30-day trial versions can be installed with the same installer. As long as no hardware protection is present or the software is not activated with a trial or an educational serial number, Archicad can be launched in demo mode. The installer files can not be downloaded without registration. The educational or trial serial numbers can be obtained after registration. Commercial version is protected by either a hardware protection key or a software key. If no key is present, Archicad switches to Demo mode where Save, Copy and Teamwork features are disabled (printing/plotting is still enabled, even in case the project file has been modified since opening). START Edition is a streamlined version of Archicad for smaller practices or offices who don't need collaboration and advanced rendering functionality. Educational versions are protected by serial numbers. Saved files in Archicad educational versions are compatible with commercial Archicad versions, but carry a watermark identifying the license type. Once a project has been edited with an Educational version, the watermark will persist in the file. Trial version is a 30-day fully functional version in which you can save, print and publish projects. File formats are fully compatible with the commercial version once the copy of Archicad is used with a commercial license. 
Otherwise the files created by a trial version are only readable by the same Archicad instance with which they were created. The trial version is protected by a serial number. Languages and localizations Archicad is available in many localized versions. In addition to a translated user interface and documentation, these versions have a set of parametric objects (object libraries) developed considering the specific requirements of the regional market, and different default values for object properties, menu arrangements, etc. Extensions Various free and commercial add-on products and extensions add extra functionality to Archicad or provide further data exchange possibilities with other software applications. Some of these extensions are developed by Graphisoft, such as the freely available Trimble SketchUp, Google Earth or Maxon's Cinema 4D import/export add-ons or other extensions sold separately such as Graphisoft MEP Modeler, Graphisoft EcoDesigner or Graphisoft Virtual Building Explorer; while there are several add-ons provided by third-party vendors, such as Cigraph or Cadimage. Version history For a detailed version history see the help center article. See also Autodesk Revit List of BIM software References External links , Graphisoft Archicad installer downloads Graphisoft Help Center GDL / BIM developer for Archicad Lachmi Khemlani wrote a review of Archicad 17 for AEC bytes Facility management software for Archicad 3D graphics software BIM software Building information modeling Computer-aided design software Computer-aided design software for Windows MacOS MacOS computer-aided design software
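The data-interchange paragraph above mentions IFC export. As a downstream illustration only (not an example of Archicad's own GDL or C++ API), the sketch below assumes the open-source ifcopenshell Python package and a hypothetical file named project.ifc exported from Archicad, and lists the wall elements it contains.

```python
# Minimal sketch: inspect an IFC model exported from Archicad.
# Assumes the third-party `ifcopenshell` package and a hypothetical "project.ifc";
# this illustrates downstream use of IFC data, not Archicad's own API.
import ifcopenshell

model = ifcopenshell.open("project.ifc")   # hypothetical exported file
for wall in model.by_type("IfcWall"):
    # GlobalId and Name are standard attributes on rooted IFC entities.
    print(wall.GlobalId, wall.Name)
```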
Archicad
[ "Engineering" ]
1,652
[ "Building engineering", "Building information modeling" ]
4,411,204
https://en.wikipedia.org/wiki/Ecological%20stoichiometry
Ecological stoichiometry (more broadly referred to as biological stoichiometry) considers how the balance of energy and elements influences living systems. Similar to chemical stoichiometry, ecological stoichiometry is founded on constraints of mass balance as they apply to organisms and their interactions in ecosystems. Specifically, it asks how the balance of energy and elements affects organisms and their interactions, and how this balance is in turn affected by them. Concepts of ecological stoichiometry have a long history in ecology, with early references to the constraints of mass balance made by Liebig, Lotka, and Redfield. These earlier concepts have been extended to explicitly link the elemental physiology of organisms to their food web interactions and ecosystem function. Most work in ecological stoichiometry focuses on the interface between an organism and its resources. This interface, whether it is between plants and their nutrient resources or large herbivores and grasses, is often characterized by dramatic differences in the elemental composition of each part. The difference, or mismatch, between the elemental demands of organisms and the elemental composition of resources leads to an elemental imbalance. Consider termites, which have a tissue carbon:nitrogen ratio (C:N) of about 5 yet consume wood with a C:N ratio of 300–1000. Ecological stoichiometry primarily asks: Why do elemental imbalances arise in nature? How are consumer physiology and life history affected by elemental imbalances? And what are the subsequent effects on ecosystem processes? Elemental imbalances arise for a number of physiological and evolutionary reasons related to the differences in the biological make-up of organisms, such as differences in types and amounts of macromolecules, organelles, and tissues. Organisms differ in the flexibility of their biological make-up and therefore in the degree to which they can maintain a constant chemical composition in the face of variations in their resources. Variations in resources can be related to the types of needed resources, their relative availability in time and space, and how they are acquired. The ability to maintain internal chemical composition despite changes in the chemical composition and availability of resources is referred to as "stoichiometric homeostasis". Like the general biological notion of homeostasis, elemental homeostasis refers to the maintenance of elemental composition within some biologically ordered range. Photoautotrophic organisms, such as algae and vascular plants, can exhibit a very wide range of physiological plasticity in elemental composition and thus have relatively weak stoichiometric homeostasis. In contrast, other organisms, such as multicellular animals, have close to strict homeostasis and they can be thought of as having distinct chemical composition. For example, carbon to phosphorus ratios in the suspended organic matter in lakes (i.e., algae, bacteria, and detritus) can vary between 100 and 1000, whereas C:P ratios of Daphnia, a crustacean zooplankton, remain nearly constant at 80:1. The general differences in stoichiometric homeostasis between plants and animals can lead to large and variable elemental imbalances between consumers and resources. Ecological stoichiometry seeks to discover how the chemical content of organisms shapes their ecology. Ecological stoichiometry has been applied to studies of nutrient recycling, resource competition, animal growth, and nutrient limitation patterns in whole ecosystems.
The Redfield ratio of the world's oceans is one very famous application of stoichiometric principles to ecology. Ecological stoichiometry also considers phenomena at the sub-cellular level, such as the P-content of a ribosome, as well as phenomena at the whole biosphere level, such as the oxygen content of Earth's atmosphere. To date, the research framework of ecological stoichiometry has stimulated research in various fields of biology, ecology, biochemistry and human health, including human microbiome research, cancer research, food web interactions, population dynamics, ecosystem services, productivity of agricultural crops and honeybee nutrition. Consumer stoichiometry and food webs The study of elemental ratios (i.e., C:N:P) within the tissues of organisms can be used to understand how organisms respond to changes in resource quality and quantity. For instance, in aquatic ecosystems, nitrogen and phosphorus pollution within streams, often due to agricultural activities, can increase the amount of N and P available to primary producers. This release from N and P limitation can impact the abundance, growth rates, and biomass of primary producers within the stream. This change in primary production can trickle through the food web via bottom-up processes and impact the stoichiometry of organisms, limiting elements, and biogeochemical cycling of streams. In addition, bottom-up changes in elemental availability can influence the morphology, phenology, and physiology of organisms, as discussed below. The focus of this article is on aquatic systems; however, similar processes related to ecological stoichiometry can be applied in the terrestrial environment as well. Invertebrate stoichiometry The demands for carbon, nitrogen and phosphorus at specific ratios by invertebrates can change at different life stages within invertebrate life history. The growth rate hypothesis (GRH) addresses this phenomenon: it states that demands for phosphorus increase during active growth phases, when P-rich nucleic acids are needed for biomass production, and that this demand is reflected in the P content of the consumer. During early growth stages, or earlier instars, invertebrates may have higher demands for N- and P-enriched resources to fuel the ribosomal production of proteins and RNA. At later stages, the demand for particular elements may shift as organisms are no longer growing as rapidly or generating protein-rich biomass. Growth rates of invertebrate organisms can also be limited by the resources that are available to them. See also Energy flow (ecology) Energy homeostasis References Ecology Stoichiometry
Ecological stoichiometry
[ "Chemistry", "Biology" ]
1,201
[ "Stoichiometry", "Chemical reaction engineering", "Ecology", "nan" ]
4,411,416
https://en.wikipedia.org/wiki/SUMO%20protein
In molecular biology, SUMO (Small Ubiquitin-like Modifier) proteins are a family of small proteins that are covalently attached to and detached from other proteins in cells to modify their function. This process is called SUMOylation (pronounced soo-muh-lā-shun and sometimes written sumoylation). SUMOylation is a post-translational modification involved in various cellular processes, such as nuclear-cytosolic transport, transcriptional regulation, apoptosis, protein stability, response to stress, and progression through the cell cycle. In human proteins, there are over 53,000 SUMO binding sites, making it a substantial component of fundamental biology. SUMO proteins are similar to ubiquitin and are considered members of the ubiquitin-like protein family. SUMOylation is directed by an enzymatic cascade analogous to that involved in ubiquitination. In contrast to ubiquitin, SUMO is not used to tag proteins for degradation. Mature SUMO is produced when the last four amino acids of the C-terminus have been cleaved off to allow formation of an isopeptide bond between the C-terminal glycine residue of SUMO and an acceptor lysine on the target protein. SUMO family members often have dissimilar names; the SUMO homologue in yeast, for example, is called SMT3 (suppressor of mif two 3). Several pseudogenes have been reported for SUMO genes in the human genome. Function SUMO modification of proteins has many functions. Among the most frequent and best studied are protein stability, nuclear-cytosolic transport, and transcriptional regulation. Typically, only a small fraction of a given protein is SUMOylated and this modification is rapidly reversed by the action of deSUMOylating enzymes. SUMOylation of target proteins has been shown to cause a number of different outcomes including altered localization and binding partners. The SUMO-1 modification of RanGAP1 (the first identified SUMO substrate) leads to its trafficking from cytosol to nuclear pore complex. The SUMO modification of ninein leads to its movement from the centrosome to the nucleus. In many cases, SUMO modification of transcriptional regulators correlates with inhibition of transcription. One can refer to the GeneRIFs of the SUMO proteins, e.g. human SUMO-1, to find out more. There are 4 confirmed SUMO isoforms in humans; SUMO-1, SUMO-2, SUMO-3 and SUMO-4. At the amino acid level, SUMO1 is about 50% identical to SUMO2. SUMO-2/3 show a high degree of similarity to each other and are distinct from SUMO-1. SUMO-4 shows similarity to SUMO-2/3 but differs in having a Proline instead of Glutamine at position 90. As a result, SUMO-4 isn't processed and conjugated under normal conditions, but is used for modification of proteins under stress-conditions like starvation. During mitosis, SUMO-2/3 localize to centromeres and condensed chromosomes, whereas SUMO-1 localizes to the mitotic spindle and spindle midzone, indicating that SUMO paralogs regulate distinct mitotic processes in mammalian cells. One of the major SUMO conjugation products associated with mitotic chromosomes arose from SUMO-2/3 conjugation of topoisomerase II, which is modified exclusively by SUMO-2/3 during mitosis. SUMO-2/3 modifications seem to be involved specifically in the stress response. SUMO-1 and SUMO-2/3 can form mixed chains, however, because SUMO-1 does not contain the internal SUMO consensus sites found in SUMO-2/3, it is thought to terminate these poly-SUMO chains. Serine 2 of SUMO-1 is phosphorylated, raising the concept of a 'modified modifier'. 
DNA damage response Cellular DNA is regularly exposed to DNA damaging agents. A well regulated and intricate DNA damage response (DDR) is usually employed to deal with the potential deleterious effects of the damage. When DNA damage occurs, SUMO protein has been shown to act as a molecular glue to facilitate the assembly of large protein complexes in repair foci. Also, SUMOylation can alter a protein's biochemical activities and interactions. SUMOylation plays a role in the major DNA repair pathways of base excision repair, nucleotide excision repair, non-homologous end joining and homologous recombinational repair. SUMOylation also facilitates error-prone translesion synthesis. Structure SUMO proteins are small; most are around 100 amino acids in length and 12 kDa in mass. The exact length and mass vary between SUMO family members and depend on which organism the protein comes from. Although SUMO has very little sequence identity with ubiquitin (less than 20%) at the amino acid level, it has a nearly identical structural fold. SUMO protein has a unique N-terminal extension of 10-25 amino acids which other ubiquitin-like proteins do not have. This N-terminal extension appears to be involved in the formation of SUMO chains. The structure of human SUMO1 is depicted on the right. It shows SUMO1 as a globular protein with both ends of the amino acid chain (shown in red and blue) sticking out of the protein's centre. The spherical core consists of an alpha helix and a beta sheet. The diagrams shown are based on an NMR analysis of the protein in solution. Prediction of SUMO attachment Most SUMO-modified proteins contain the tetrapeptide consensus motif Ψ-K-x-D/E, where Ψ is a hydrophobic residue, K is the lysine conjugated to SUMO, x is any amino acid (aa), and D or E is an acidic residue. Substrate specificity appears to be derived directly from Ubc9 and the respective substrate motif. Currently available prediction programs are: SUMOplot - online free access software developed to predict the probability of the SUMO consensus sequence (SUMO-CS) being engaged in SUMO attachment. The SUMOplot score system is based on two criteria: 1) direct amino acid match to the SUMO-CS observed and shown to bind Ubc9, and 2) substitution of the consensus amino acid residues with amino acid residues exhibiting similar hydrophobicity. SUMOplot has been used in the past to predict Ubc9-dependent sites. seeSUMO - uses random forests and support vector machines trained on data collected from the literature. SUMOsp - uses PSSM to score potential SUMOylation peptide sites. It can predict sites following the ψKXE motif as well as unusual SUMOylation sites containing other non-canonical motifs. JASSA - online free access predictor of SUMOylation sites (classical and inverted consensus) and SIMs (SUMO interacting motifs). JASSA uses a scoring system based on a Position Frequency Matrix derived from the alignment of experimental SUMOylation sites or SIMs. Novel features were implemented towards a better evaluation of the prediction, including identification of database hits matching the query sequence and representation of candidate sites within the secondary structural elements and/or the 3D fold of the protein of interest, retrievable from deposited PDB files. 
SumoPred-PLM or SUMOylation site Prediction using Protein Language Model - An AI deep learning utility to predict based on known biological rules around SUMO2 and SUMO3 binding in human proteins incorporating knowledge from a separate pretrained PLM tool developed previously in 2021 by Elnaggar et al. known as ProtT5-XL-UniRef50. Such collaboration between multidisciplinary AI tools is becoming common practice. SUMO attachment (SUMOylation) SUMO attachment to its target is similar to that of ubiquitin (as it is for the other ubiquitin-like proteins such as NEDD 8). The SUMO precursor has some extra amino acids that need to be removed, therefore a C-terminal peptide is cleaved from the SUMO precursor by a protease (in human these are the SENP proteases or Ulp1 in yeast) to reveal a di-glycine motif. The obtained SUMO then becomes bound to an E1 enzyme (SUMO Activating Enzyme (SAE)) which is a heterodimer (subunits SAE1 and SAE2). It is then passed to an E2, which is a conjugating enzyme (Ubc9). Finally, one of a small number of E3 ligating proteins attaches it to the protein. In budding yeast, there are four SUMO E3 proteins, Cst9, Mms21, Siz1 and Siz2. While in ubiquitination an E3 is essential to add ubiquitin to its target, evidence suggests that the E2 is sufficient in SUMOylation as long as the consensus sequence is present. It is thought that the E3 ligase promotes the efficiency of SUMOylation and in some cases has been shown to direct SUMO conjugation onto non-consensus motifs. E3 enzymes can be largely classed into PIAS proteins, such as Mms21 (a member of the Smc5/6 complex) and Pias-gamma and HECT proteins. On Chromosome 17 of the human genome, SUMO2 is near SUMO1+E1/E2 and SUMO2+E1/E2, among various others. Some E3's, such as RanBP2, however, are neither. Recent evidence has shown that PIAS-gamma is required for the SUMOylation of the transcription factor yy1 but it is independent of the zinc-RING finger (identified as the functional domain of the E3 ligases). SUMOylation is reversible and is removed from targets by specific SUMO proteases. In budding yeast, the Ulp1 SUMO protease is found bound at the nuclear pore, whereas Ulp2 is nucleoplasmic. The distinct subnuclear localisation of deSUMOylating enzymes is conserved in higher eukaryotes. DeSUMOylation SUMO can be removed from its substrate, which is called deSUMOylation. Specific proteases mediate this procedure (SENP in human or Ulp1 and Ulp2 in yeast). In yeast, SMT3 encodes the SUMO protein, and SUMO E3 ligase attaches SUMO to target proteins. In cell cycle regulation, the base case is that SUMO ligation is constantly taking place, leading to polySUMOylation of eligible target proteins. This is countered by the SUMO protease Ulp2 which cleaves polySUMO groups, leaving the protein in a monoSUMOylated state. As shown in the Biorender figure, there is a feedback mechanism in which ULP2 maintains the monoSUMOylated state by passively and diligently cleaving SUMO such that the polySUMOyated state is never stabilized enough to be acted upon by downstream actors. This deSUMOylation is critical to prevent precocious advancement of the cell cycle as discussed in several studies. The deSUMOylation may be arrested by the inhibitory phosphorylation of the Ulp2 SUMO protease by the Polo-like kinase Cdc5. By inhibiting the deSUMOylation of Ulp2, polySUMOylation is then promoted as the new stable state of target proteins, which are often but not always bound to other proteins in order to regulate major changes within the cell. 
Cdc5 is countered by the Rts1-PP2A phosphatase, which maintains the active state of the Ulp2 SUMO protease by removing the phosphate group added by Cdc5 kinase. The consequence of disrupting the counteracting deSUMOylation is the following: First, the targeted protein becomes polySUMOylated. Second, a SUMO-targeted ubiquitin ligase, or STUbL (SLX5 or SLX8 in the case of yeast), may then bind the polySUMOylated target and attach ubiquitin groups (often polyubiquitinating the already polySUMOylated protein). Third, segregases such as Cdc48 may then dissociate the SUMOylated and ubiquitinated target from its bound protein. Fourth, while the protein it had been bound to is now free to do what it could not do while bound, the dissociated protein may then be degraded by the canonical ubiquitin-proteasome pathway. As studied in budding yeast, in the case of Tof2-Cdc14, Cdc14 release from the nucleolus allows the Mitotic Exit Network to commence, but it is regulated by the binding of Tof2, a protein subject to SUMOylation. Likewise, the Cohesin protein which binds sister chromatids in metaphase is able to be targeted by SUMOylation to allow the Cdc48 segregase to separate Cohesin and allow sister chromatid separation in early anaphase. As is often the case in research, scientists test drugs known to have significant effects on living systems; one such example is Rapamycin (known in pharmaceuticals as Sirolimus), the well-known inhibitor of mechanistic Target of Rapamycin, or mTOR. With respect to SUMOylation, Rapamycin may be thought of as having a "sledgehammer" effect, in which the drug promotes cellular autophagy, part of which includes broad-spectrum promotion of nonspecific SUMOylation for many proteins. This may be beneficial in some circumstances as it supports the breakdown of accumulated waste products. The importance of these studies in models such as yeast lies in their potential to inform the research and development of precise biomedical interventions that can translate to the improvement of human health in an array of clinical settings. Role in Human Pathology SUMO protein is implicated in the etiology of many biomedical disease states, including cancer, atherosclerosis, cardiovascular disease, neurodegenerative disease, diabetes, liver disease, intestinal disorders, and even infectious disease. In the case of the well-studied cancer tumor suppressor known as p53, there is a regulatory ubiquitin ligase protein in humans called Mouse Double Minute 2 protein, or MDM2, which acts to remove p53 from the cell. MDM2 regulates itself through self-ubiquitination by way of a RING finger domain, targeting itself for proteasomal destruction. When it is SUMOylated at the RING finger domain, MDM2 no longer limits its own function in the cell. When protected from self-destruction in this way, MDM2 instead ubiquitinates p53, marking the protective p53 for destruction; the absence of p53 is understood to promote cancer. Here again, the base case is SUMOylation, which is actively undone by the newly discovered SUMO protease SUSP4 and also by SMT3IP1/SENP3, which is understood to deSUMOylate both MDM2 and p53. One of the ways p53 functions is as a DNA-binding tetramer; interestingly, SUMOylation of p53 delocalizes it from the nucleus, which prevents such activity. The critical nature of p53 cannot be overstated: if a human carries only one non-functioning copy of p53, the result is the severe cancer predisposition known as Li-Fraumeni syndrome. 
Beyond p53, many oncogenes and tumor suppressors in cancer have been discovered to be SUMOylated, with each SUMOylation event either promoting or restraining cancer progression through one of a variety of effects. When IκB is SUMOylated, the SUMO post-translational modification outcompetes ubiquitination, protecting it from degradation, and by extension, the transcription factor NF-κB is bound in a complex with IκB, preventing the expression of genes that may otherwise cause cells with DNA damage to apoptose. In hypoxic conditions, as arise in some cancers, HIF-1α, which is usually SUMOylated and subsequently ubiquitinated and degraded through the von Hippel-Lindau tumor suppressor's ubiquitin ligase activity, is instead deSUMOylated, thereby promoting survival of the tumorigenic cells. The fallout from deSUMOylation of HIF-1α includes promotion of MMPs, which are understood to contribute to the progression of EMT, a hallmark of cancer. In atherosclerosis, both p53 and ERK5 are SUMOylated by the stimulus of disturbed blood flow. The stimulus is transduced by the activation of a serine/threonine kinase called p90RSK, which phosphorylates the human SUMO protease SENP2 at threonine residue 368. That phosphorylation is sufficient for the delocalization of SENP2 from the nucleus. The effects of this phosphorylation-dependent SENP2 inhibition by nuclear export include the SUMOylation of p53, which leads to endothelial cell apoptosis, and SUMOylation of ERK5, which leads to inflammation. Nuclear export of SENP2 additionally downregulates endothelial nitric oxide synthase (eNOS) while it upregulates inflammatory adhesion molecules. As eNOS is required for healthy vascular physiology, pathological oxidative stress ensues in vascular endothelial cells. With the oxidative stress comes subsequent accumulation of cellular lipids; this results in the inflammatory foamy cell state that is typified by atherosclerosis, as well as the similarly inflammatory myelin-laden macrophages known to produce chronic inflammation in spinal cord injury (SCI). In cardiovascular disease, many proteins are subject to SUMOylation. To say SUMOylation itself is bad or good regarding this or any other class of disease is to overlook the role of the multiple proteins in question. One common denominator among many conditions is fibrosis; in myocardial fibrosis, PPARγ1 is understood to have a role in regulating expression of some key genes, and its transcriptional activity is generally inhibited by SUMOylation. Therefore, one possible therapeutic intervention in the case of cardiac hypertrophy may be countering the SUMOylation of PPARγ1. In neurodegenerative disease, pathological accumulation of proteins is often observed. Inclusion bodies form when, for example, the Huntington's disease protein, huntingtin, accumulates and folds into a form which is impervious to the proteasome. In Huntington's disease, sufficient SUMOylation of the anomalous huntingtin protein prior to such refolding could perhaps delay the progression of the disease state by enabling timely destruction of the protein while the polypeptide chains are still accessible to the protease subunits within the proteasome. Other accumulating proteins which threaten neurodegenerative disorders include α-synuclein (associated with Parkinson's) and amyloid β (associated with Alzheimer's), and if acted upon early enough, disease could perhaps be better mitigated. 
Human SUMO proteins SUMO1 SUMO2 SUMO3 SUMO4 NSMCE2 Role in protein purification Recombinant proteins expressed in E. coli may fail to fold properly, instead forming aggregates and precipitating as inclusion bodies. This insolubility may be due to the presence of codons read inefficiently by E. coli, differences in eukaryotic and prokaryotic ribosomes, or lack of appropriate molecular chaperones for proper protein folding. In order to purify such proteins it may be necessary to fuse the protein of interest with a solubility tag such as SUMO or MBP (maltose-binding protein) to increase the protein's solubility. SUMO can later be cleaved from the protein of interest using a SUMO-specific protease such as Ulp1 peptidase. See also Ubiquitin Prokaryotic ubiquitin-like protein References Further reading External links SUMO1 homology group from HomoloGene human SUMO proteins on ExPASy: SUMO1 SUMO2 SUMO3 SUMO4 Programs for prediction SUMOylation: SUMOplot Analysis Program — predicts and scores SUMOylation sites in your protein (by Abgent) seeSUMO - prediction of SUMOylation sites SUMOsp - prediction of SUMOylation sites JASSA - Predicts and scores SUMOylation sites and SIM (SUMO interacting motif) Research laboratories Post-translational modification Proteins
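As an illustration of the consensus-motif rule described under "Prediction of SUMO attachment" above, the sketch below scans a protein sequence for Ψ-K-x-D/E matches. It is a deliberately naive pattern match, not a reimplementation of SUMOplot, SUMOsp or JASSA; the particular set of hydrophobic residues used for Ψ and the example sequence are assumptions made for illustration, and real predictors layer scoring and structural context on top of this rule.

```python
import re

# Psi-K-x-D/E: a hydrophobic residue, the acceptor lysine, any residue, then D or E.
# The hydrophobic set chosen for Psi (I, L, V, M, A, F, P) is an illustrative assumption.
SUMO_CONSENSUS = re.compile(r"(?=([ILVMAFP]K[A-Z][DE]))")

def find_sumo_consensus_sites(sequence):
    """Return (1-based position of the acceptor lysine, matched tetrapeptide) pairs."""
    sequence = sequence.upper()
    return [(m.start() + 2, m.group(1)) for m in SUMO_CONSENSUS.finditer(sequence)]

# Hypothetical example sequence; it contains three matches (VKTE, IKQE, LKME).
example = "MAVKTEPLDSGIKQESDALKMEPK"
for pos, motif in find_sumo_consensus_sites(example):
    print(f"K at position {pos}: motif {motif}")
```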
SUMO protein
[ "Chemistry" ]
4,287
[ "Biomolecules by chemical classification", "Gene expression", "Biochemical reactions", "Post-translational modification", "Molecular biology", "Proteins" ]
15,007,619
https://en.wikipedia.org/wiki/Scotford%20Upgrader
The Shell Scotford Upgrader is an oilsand upgrader, a facility which processes crude bitumen from oil sands into a wide range of synthetic crude oils. The upgrader is owned by Athabasca Oil Sands Project (AOSP), a joint venture of Shell Canada Energy (60%), Marathon Oil Sands L.P. (20%) and Chevron Canada Limited (20%). The facility is located in the industrial development of Scotford, just to the northeast of Fort Saskatchewan, Alberta in the Edmonton Capital Region. Site The Scotford Upgrader is a part of a larger site known as Shell Scotford located 40 km northeast of Edmonton, Alberta. Shell Scotford comprises three operating units: the Upgrader, a Refinery, and a Chemical plant. The Scotford Cogeneration Plant is also located on the site. Currently, work is being done on the first Upgrader expansion. In 1984, Shell opened both the Refinery and Chemical plant on the Scotford site. As one of North America's most modern and efficient refineries, the Scotford Refinery was the first to exclusively process synthetic crude oil from Alberta’s oil sands. Benzene that is produced during the refining process is sent to the adjacent Chemical plant and is used in the production of Styrene Monomer, a chemical needed to make many of the hard plastics people use daily. In 2000, a glycol plant was opened at Scotford Chemicals. Much of the output of the Scotford Upgrader is sold to the Scotford Refinery. Both light and heavy crudes are also sold to Shell's Sarnia Refinery in Ontario. The rest of the synthetic crude is sold to the general marketplace and shipped by pipeline. History of Scotford In 1891, a group of immigrants from Galicia, Austria settled on the land south of the North Saskatchewan River, near the South Victoria Trail. Philip Krebs, along with his son John, settled on the north side of South Victoria Trail. Their home became a popular stopping place for those travelling along the trail. Besides being a hospitable natured man, John was fluent in four European languages (German, English, Polish, and Ukrainian) and could speak Cree - making him popular with those who stopped by. When the Canadian Northern Railway was being built into Fort Saskatchewan, Philip Krebs’ homestead was a natural place for a stop. In 1905, a loading station was erected there, and on the siding of the building was the name “Scotford” (named after Walter Scott and Alexander Rutherford, the premiers of the two provinces – Saskatchewan and Alberta - that were formed that same year). The area is still referred to by that name. Description The Scotford Upgrader has a rated processing capacity of . It was shut down after being damaged in a fire 19 November 2007. The production was resumed in December 2007. The facility uses hydrogen addition to convert the bitumen from CNRL's Muskeg River Mine in the Athabasca oil sands into refinery-ready sweet, light crude oil. The Muskeg River Mine is the first commercial unit using Shell's Enhance froth treatment technology — a process for removing sand, fine clay and water from oil sands froth to make clean bitumen suitable for upgrading via hydrogen addition. According to Shell, the hydrogenation process is well suited to the very clean bitumen produced at the Muskeg River Mine, and results in the upgrader producing more light crude oil than it inputs in the form of heavy bitumen. It also produces lower levels of sulfur dioxide emissions than the alternative coking method which removes carbon to produce petroleum coke as a by-product. 
The Scotford Upgrader has its own hydrogen manufacturing unit and produces most of the hydrogen required for the hydrogen-addition process. The Scotford Upgrader capacity was expanded in March 2010, an increase of 60% in capacity. In May 2007, the US$9 billion to US$11.3 billion expansion contract was awarded to TIC, Bantrel Constructors, PCL & KBR. KBR built 160 modules and performed construction work for the Atmospheric and Vacuum (A&V) unit and Sulphur Recovery Unit (SRU). Bantrel completed the tank farm, Utilities, Waterblock and Flare units; PCL completed the Residue Hydroconversion Complex (RHC); and TIC constructed the Hydrogen Manufacturing Unit (HMU). See also Alberta's Industrial Heartland Albian Sands Husky Lloydminster Refinery, Lloydminster (Husky Energy), Scotford Refinery, Strathcona County (Shell Canada), Strathcona Refinery, Strathcona County (Imperial Oil), Sturgeon Refinery, Sturgeon County (North West Redwater Partnership — Canadian Natural Resources and North West Refineries), Suncor Edmonton Refinery, Strathcona County (Suncor Energy), References External links Upgrader Design Details (www.bechtel.com/assets/files/PDF/DetailDesign.pdf) Scotford Upgrader (Shell Canada website) Scotford Complex (Shell Canada website) Muskeg River Mine (Shell Canada website) (Oil Sands Magazine) Petroleum technology Bituminous sands of Canada Petroleum industry in Alberta Petroleum industry in Canada Oil refineries in Alberta Buildings and structures in Alberta Shell plc buildings and structures Strathcona County
Scotford Upgrader
[ "Chemistry", "Engineering" ]
1,104
[ "Petroleum engineering", "Petroleum technology" ]
15,008,505
https://en.wikipedia.org/wiki/Pattern%20Languages%20of%20Programs
Pattern Languages of Programs is a group of annual conferences sponsored by The Hillside Group. The purpose of these conferences is to develop and refine the art of software design patterns. Most of the effort focuses on developing a textual presentation of a pattern such that it becomes easy to understand and apply. This is typically done in a writers' workshop setting. The flagship conference The flagship conference is called the Pattern Languages of Programs conference, abbreviated as PLoP. PLoP has been held in the U.S.A. since 1994. Until 2004 it was held annually at Allerton Park in Monticello, Illinois, a property of the University of Illinois at Urbana-Champaign. Since then, its location has alternated between Allerton Park and co-location with other conferences: OOPSLA, a large computer science conference; the Agile Conference in 2009; and PUARL in 2018. The 27th PLoP will be held in Keystone, Colorado. Notable people who chaired the conference in the past include Ward Cunningham, Richard Gabriel, Ralph Johnson, John Vlissides and Kent Beck. PLoP (and several other Pattern Languages of Programs conferences) is sponsored by The Hillside Group, a U.S.-based non-profit organization that holds the PLoP trademark and the rights to the conference. Other PLoP conferences AsianPLoP AsianPLoP is the PLoP event for the Asian community, commonly featuring patterns in both English and Japanese. ChiliPLoP ChiliPLoP is an annual conference featuring "hot topics" of the PLoP community. It has been held in the U.S. since 1998. EuroPLoP Held since 1996 in Kloster Irsee, Germany (former monastery, now Swabian Conference and Education Centre). KoalaPLoP Held in Australia or New Zealand. MensorePLoP MensorePLoP 2001, held on the island of Okinawa, Japan. MiniPLoP MiniPLoP 2011, held in IME/USP, São Paulo, Brazil. ScrumPLoP SugarLoafPLoP SugarLoafPLoP, held in Brazil. VikingPLoP VikingPLoP, held mostly in the Scandinavian countries, but also moving around in Europe. Publications The conference proceedings are typically published locally as technical reports of a sponsoring university. From 1998 to 2007, EuroPLoP papers were published annually by the German publisher Universitätsverlag Konstanz. Between 2008 and 2012 proceedings appeared in several places. CEUR-WS holds papers for 2008 and papers for 2009 (in addition, a complete set of 2009 papers is available from Lulu.com in printed and PDF formats). A printed version of the EuroPLoP 2012 papers is also available on Lulu.com. Since 2012 a subset of EuroPLoP papers has been submitted to the ACM Digital Library. After the conference, authors are given the chance to submit a revised paper for publication in the book series Pattern Languages of Program Design by Addison Wesley. In 2007, an academic journal was started, called Transactions on Pattern Languages of Programming. The editors-in-chief are James Noble and Ralph Johnson and the European editor is Uwe Zdun. The journal is published by Springer-Verlag. See also Pattern language, from which the name and concept arose References External links The homepage of the Pattern Languages of Programs conferences, organized by the Hillside Group The LinkedIn group for PLoP. The homepage of the European Pattern Languages of Programs conference, organized by Hillside Europe Springer Verlag's homepage for the Transactions on Pattern Languages of Programming journal Ward's wiki HistoryOfPatterns including how PLoP came about Software engineering conferences
Pattern Languages of Programs
[ "Engineering" ]
735
[ "Software engineering", "Software engineering conferences" ]
15,019,690
https://en.wikipedia.org/wiki/Baskakov%20operator
In functional analysis, a branch of mathematics, the Baskakov operators are generalizations of Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators. They are defined by [\mathcal{L}_n(f)](x) = \sum_{k=0}^{\infty} (-1)^k \frac{x^k}{k!} \phi_n^{(k)}(x)\, f\!\left(\frac{k}{n}\right), where x \in [0,b) (b can be \infty), n \in \mathbb{N}, and (\phi_n)_{n \in \mathbb{N}} is a sequence of functions defined on [0,b] that have the following properties for all n, k \in \mathbb{N}: \phi_n \in C^\infty([0,b]). Alternatively, \phi_n has a Taylor series on [0,b). \phi_n(0) = 1. \phi_n is completely monotone, i.e. (-1)^k \phi_n^{(k)} \geq 0. There is an integer c such that \phi_n^{(k+1)} = -n\, \phi_{n+c}^{(k)} whenever n > \max\{0, -c\}. They are named after V. A. Baskakov, who studied their convergence to bounded, continuous functions. Basic results The Baskakov operators are linear and positive. References Footnotes Approximation theory
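A minimal numerical sketch can make the definition concrete. Choosing \phi_n(x) = (1+x)^{-n}, which satisfies the properties above with c = 1, gives the classical Baskakov operator, whose weights \binom{n+k-1}{k} x^k (1+x)^{-(n+k)} sum to 1; the truncation strategy, test function and sample values below are illustrative assumptions, not part of the standard definition.

```python
from math import exp

def baskakov(f, n, x, max_terms=2000, tol=1e-15):
    """Classical Baskakov operator L_n f(x) with phi_n(x) = (1 + x)**(-n).

    weight_k = C(n+k-1, k) * x**k / (1+x)**(n+k), evaluated by the stable
    recurrence weight_{k+1} = weight_k * (n + k) / (k + 1) * x / (1 + x).
    """
    w = (1.0 + x) ** (-n)           # weight for k = 0
    step = x / (1.0 + x)
    total = 0.0
    for k in range(max_terms):
        total += w * f(k / n)
        w *= (n + k) / (k + 1) * step
        if w < tol and k > n * x:   # remaining tail is negligible
            break
    return total

if __name__ == "__main__":
    f = lambda t: exp(-t)
    x = 1.5
    for n in (5, 20, 80):
        # L_n f(x) should approach f(x) (about 0.2231) as n grows
        print(n, round(baskakov(f, n, x), 4))
```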
Baskakov operator
[ "Mathematics" ]
129
[ "Mathematical analysis", "Mathematical analysis stubs", "Approximation theory", "Mathematical relations", "Approximations" ]
1,692,055
https://en.wikipedia.org/wiki/Oxygen%20difluoride
Oxygen difluoride is a chemical compound with the formula OF2. As predicted by VSEPR theory, the molecule adopts a bent molecular geometry. It is a strong oxidizer and has attracted attention in rocketry for this reason. With a boiling point of −144.75 °C, OF2 is the most volatile (isolable) triatomic compound. The compound is one of many known oxygen fluorides. Preparation Oxygen difluoride was first reported in 1929; it was obtained by the electrolysis of molten potassium fluoride and hydrofluoric acid containing small quantities of water. The modern preparation entails the reaction of fluorine with a dilute aqueous solution of sodium hydroxide, with sodium fluoride as a side-product: 2 F2 + 2 NaOH → OF2 + 2 NaF + H2O. Structure and bonding It is a covalently bonded molecule with a bent molecular geometry and an F–O–F bond angle of 103 degrees. Its powerful oxidizing properties are suggested by the oxidation number of +2 for the oxygen atom instead of its normal −2. Reactions Above 200 °C, OF2 decomposes to oxygen and fluorine by a radical mechanism. OF2 reacts with many metals to yield oxides and fluorides. Nonmetals also react: phosphorus reacts with OF2 to form PF5 and POF3; sulfur gives SO2 and SF4; and, unusually for a noble gas, xenon reacts (at elevated temperatures) yielding XeF4 and xenon oxyfluorides. Oxygen difluoride reacts with water to form hydrofluoric acid: OF2 + H2O → 2 HF + O2. It can oxidize sulphur dioxide to sulfur trioxide and elemental fluorine: OF2 + SO2 → SO3 + F2. However, in the presence of UV radiation, the products are sulfuryl fluoride (SO2F2) and pyrosulfuryl fluoride (S2O5F2): OF2 + 2 SO2 → S2O5F2. Safety Oxygen difluoride is considered an unsafe gas due to its oxidizing properties. It reacts explosively with water. Hydrofluoric acid produced by the hydrolysis of OF2 with water is highly corrosive and toxic, capable of causing necrosis, leaching calcium from the bones and causing cardiovascular damage, among a host of other highly toxic effects. Other acute poisoning effects include: pulmonary edema, bleeding lungs, headaches, etc. Chronic exposure to oxygen difluoride, like that of other chemicals that release fluoride ions, can lead to fluorosis and other symptoms of chronic fluoride poisoning. Oxygen difluoride may be associated with kidney damage. The maximum workplace exposure limit is 0.05 ppm. Popular culture In Robert L. Forward's science fiction novel Camelot 30K, oxygen difluoride was used as a biochemical solvent by fictional life forms living in the solar system's Kuiper belt. While OF2 would be a solid at 30 K, the fictional alien lifeforms were described as endothermic, maintaining elevated body temperatures and liquid blood by radiothermal heating. Notes References External links National Pollutant Inventory - Fluoride and compounds fact sheet WebBook page for OF2 CDC - NIOSH Pocket Guide to Chemical Hazards Oxygen fluorides Rocket oxidizers
Oxygen difluoride
[ "Chemistry" ]
630
[ "Oxygen fluorides", "Rocket oxidizers", "Oxidizing agents" ]
1,692,706
https://en.wikipedia.org/wiki/Generation%20of%20Animals
The Generation of Animals (or On the Generation of Animals; Greek: Περὶ ζῴων γενέσεως (Peri Zoion Geneseos); Latin: De Generatione Animalium) is one of the biological works of the Corpus Aristotelicum, the collection of texts traditionally attributed to Aristotle (384–322 BC). The work provides an account of animal reproduction, gestation and heredity. Content Generation of Animals consists of five books, which are themselves split into varying numbers of chapters. Most editions of this work categorise it with Bekker numbers. In general, each book covers a range of related topics; however, there is also a significant amount of overlap in the content of the books. For example, while one of the two principal topics covered in book I is the function of semen (gone, sperma), this account is not finalised until partway through book II. Book I (715a – 731b) Chapter 1 begins with Aristotle claiming to have already addressed the parts of animals, referencing the author's work of the same name. While this work, and possibly his other biological works, have addressed three of the four causes pertaining to animals (the final, formal, and material), the efficient cause has yet to be spoken of. He argues that the efficient cause, or "that from which the source of movement comes", can be addressed with an inquiry into the generation of animals. Aristotle then provides a general overview of the processes of reproduction adopted by the various genera: for instance, most 'blooded' animals reproduce by coition of a male and female of the same species, but cases vary for 'bloodless' animals. The reproductive organs of males and females are also investigated. Through chapters 2–5 Aristotle successively describes the general reproductive features common to each sex, the differences in reproductive parts among blooded animals, the causes of differences of testes in particular, and why some animals do not have external reproductive organs. The latter provides clear examples of Aristotle's teleological approach to causation, as it is applied to biology. He argues that the male hedgehog has its testes near its loin, unlike the majority of vivipara, because, owing to their spines, hedgehogs mate standing upright. The hedgehog's form is that of an animal able to use its spines for self-defence, and so its reproductive organs are situated in such a way as to complement this. Chapter 6 describes why fish and serpents copulate in a short space of time, and chapter 7 provides an explanation for why serpents intertwine during coition. Chapters 8–11 focus on female reproductive organs, and in particular the differences in viviparous and oviparous production of young, and the differing states of the eggs produced by ovipara. This is continued in chapters 12 and 13, where Aristotle discusses the reasons the uterus is internal and the testes external, and their locations among various species. Concluding this section on the reproductive parts of animals is an overview from chapters 14–16 of the generative faculties of crustacea, cephalopods, and insects. This section contains an admission of observational uncertainty, with Aristotle stating that observations of insect coition are not yet detailed enough to classify into types. The remainder of Book I (chapters 17 – 23) is concerned with providing an account of semen and its contribution to the generative process. 
The primary conclusions reached in this section are, firstly, that semen is not a bodily waste product, but "a residue of useful nutriment", and, secondly, that because the bodily emissions produced by females during copulation are not of a similar nutritive character, semen must be the efficient cause of offspring. Book II (731b – 749a) Chapters 1–3 of Book II continue the discussion of semen from the end of Book I. Aristotle questions potential ways in which the particular parts of animals might come to be formed, such as semen containing small versions of the bodily organs, before settling on the idea that semen contributes the potential (dunamis) for the parts to come into being as they are. This is the basis for the imparting of the soul upon the material substratum present in the egg, as the female reproductive residue itself contains no active principle for the motion required to form an embryo. Aristotle's conception of the soul should not be mistaken for one which takes the soul to be a non-physical substance separate from the body. It instead comprises the ability for some function to be performed, which in the case of bodily development means the ability for organs to perform their bodily functions. Scholar Devin Henry describes Aristotle's view as follows: "Aristotelian souls are not the sorts of things that are capable of being implanted in bodily organs from without (except perhaps intellectual soul). Soul is not an extra ingredient added to the organ over-and-above its structure. Once there is a properly constructed organ it straightaway possess the corresponding soul-function in virtue of its structure." The generative capacity of semen in imparting the soul is its heat, with semen itself being "a compound of breath and water". It is the component of breath (pneuma) that shapes the material provided by the female into the correct form. The mechanics of the development of the embryo take up much of chapters 4–7, with Aristotle addressing first the different stages of development at which vivipara and ovipara expel their young. In chapter 5 the theory of soul-imparting is amended slightly, as observations of wind-eggs show that the female, unassisted, is able to impart the nutritive aspect of the soul, which Aristotle claims is its lowest portion. Chapter 6 addresses the order in which the parts of an embryo come about, and in chapter 7 Aristotle argues that, contrary to what Democritus apparently thought ("children are nourished in the uterus by sucking some lump of flesh"), unborn vivipara are in actuality nourished by the umbilical cord. Chapter 8 discusses cross-breeding of species, and the sterility of mules. Book III (749a – 763b) Book III covers non-viviparous embryonic development. The first four chapters provide a description and explanation of eggs, while in chapters 5–7 Aristotle responds to other ideas about eggs and some observational difficulties in providing an empirical account of all eggs. The final chapters cover the development of hitherto unmentioned animals. Chapter 1 is on the subject of bird eggs, with Aristotle providing explanations for why different birds produce different numbers of eggs, why some birds produce wind-eggs, and why bird eggs are sometimes of two colours. Following an explication of the formation of eggs and how they provide nutrition for the embryo in chapter 2, in chapter 3 Aristotle compares the eggs of birds against those of fish. 
The descriptive account of eggs is completed in chapter 4, which describes the growth of some eggs after they have been laid. Chapters 5 and 6 are a response to what Aristotle takes to be falsely-held beliefs of other scientists concerning the process of procreation. For example, Anaxagoras apparently held that weasels give birth from their mouths because "the young of the weasel are very small like those of the other fissipeds, of which we shall speak later, and because they often carry the young about in their mouths." Aristotle states instead that weasels have the same uteruses as other quadrupeds, and there is nothing to connect the uterus to the mouth, so such a claim as Anaxagoras' must be unfounded. Chapters 7–10 cover the generative processes of selachians, cephalopods, crustacea, insects and bees, in successive order. Chapter 11 concerns the generation of testacea, which are said to generate spontaneously. While it is possible for some of the Testacea, such as mussels, to emit a liquid slime which can form others of the same kind, they are also formed "in connexion with putrefaction and admixture of rain-water." Book IV (763b – 778a) Book IV is primarily on the topic of biological inheritance. Aristotle is concerned with both the similarities between the offspring and parents and the differences that can arise within a particular species as a result of the generative process. Chapter 1 is an account of the origin of the sexes. Aristotle considers the sexes to be "the first principles of all living things". Given this, the sex of an embryo is determined entirely by the potency of the fertilising semen, which contains the male principle. If this semen lacks heat in fashioning the material present in the female then the male principle cannot take hold, and therefore its opposite principle must take hold. In chapter 2 Aristotle provides pieces of observational evidence for this, including the following: "Again, more males are born if copulation takes place when north than when south winds are blowing; for animals' bodies are more liquid when the wind is in the south, so that they produce more residue – and more residue is harder to concoct; hence the semen of the males is more liquid and so is the discharge of the menstrual fluids in women." In chapter 3 Aristotle provides the primary elements of his theory of inheritance and resemblances. Utilising the account of the function of semen from Book II, Aristotle describes how the movement of semen upon the proto-embryonic material gives rise to particular traits inherited from one's ancestors. Semen contains the general male principle, and contains in addition that of the particular male whose semen it is, so Socrates' semen will contain his particular genetic traits. In fashioning the material the semen imparts, or does not impart, genetic traits in the same way as the determination of sex, where a resemblance to the father will be imparted onto the material if the semen is of a suitable temperature, provided the male principle has established the sex as male. If instead the male principle was hot enough to be imparted but that of the particular male, Socrates, was not, then the movement may either put forth a resemblance to the mother, or it could relapse into that of the father of the father or some other non-immediate ancestor. Chapter 4 develops this theory for the cases of deformities, and why different animals produce different amounts of offspring. 
The former is due to malformed reproductive material present in the female, and for the latter it is particular relations of the size of the animal, the moisture of reproductive materials, and the heat of semen. Chapter 5 presents the causes of superfetation, which is an inadequate separation of multiple young during gestation. Chapters 6 and 7 focus on the causes of other birth defects, and why males are allegedly more likely to suffer from defects. Chapters 8–10 concern the production of milk, why animals are born headfirst, and the length of gestation being proportional to the length of life, respectively. Book V (778a – 789b) Aristotle takes Book V to be an investigation of "the qualities by which the parts of animals differ." The subjects addressed by this book are a miscellaneous range of animal parts, such as eye colour (chapter 1), body hair (chapter 3) and the pitch of the voice (chapter 7). The apparent lack of a single causal scheme or subject matter for these discrete topics has led to disagreement about how this book relates to the rest of the Generation of Animals. Some scholars take the Book to be concerned only with material causes of intra-species differences that arise later in development, in contrast with the earlier books' systematic use of teleology. Others have suggested that Book V does utilise causation other than material to a considerable extent. References Works Cited Aristotle. 'Generation of Animals'. In The Complete Works of Aristotle, edited by Jonathan Barnes, translated by A. Platt, 6. print., with Corr., 1:pp. 1111–1218. Bollingen Series, 71,2. Princeton, N.J: Princeton Univ. Pr, 1995. Henry, Devin. 'Generation of Animals'. In A Companion to Aristotle, edited by Georgios Anagnostopoulos, pp. 368–84. Blackwell Companions to Philosophy 42. Chichester, U.K.; Malden, MA: Wiley-Blackwell, 2009. Henry, Devin. 'How Sexist Is Aristotle's Developmental Biology?' Phronesis 52, no. 3 (2007): pp. 251–69. Katayama, Errol G. Aristotle on Artifacts: A Metaphysical Puzzle. SUNY Series in Ancient Greek Philosophy. Albany, NY: State University of New York Press, 1999. Lennox, James G. 'Aristotle's Biology'. Stanford Encyclopedia of Philosophy, 16 July 2021. Nielsen, Karen. 'The Private Parts of Animals: Aristotle on the Teleology of Sexual Difference'. Phronesis 53, no. 4–5 (2008): pp. 373–405. Tuana, Nancy. 'The Weaker Seed. The Sexist Bias of Reproductive Theory'. Hypatia 3, no. 1 (1988): pp. 35–59. External links Generation of Animals by Aristotle translated by Arthur Platt Works by Aristotle Genetics Genetics books
Generation of Animals
[ "Biology" ]
2,789
[ "Genetics" ]
1,693,196
https://en.wikipedia.org/wiki/Simmons%E2%80%93Smith%20reaction
The Simmons–Smith reaction is an organic cheletropic reaction involving an organozinc carbenoid that reacts with an alkene (or alkyne) to form a cyclopropane. It is named after Howard Ensign Simmons, Jr. and Ronald D. Smith. The methylene unit is delivered from the zinc carbenoid to both carbons of the alkene simultaneously; therefore the configuration of the double bond is preserved in the product and the reaction is stereospecific. Mechanism Examples For example, cyclohexene, diiodomethane, and a zinc-copper couple (as iodomethylzinc iodide, ICH2ZnI) yield norcarane (bicyclo[4.1.0]heptane). The Simmons–Smith reaction is generally preferred over other methods of cyclopropanation; however, it can be expensive due to the high cost of diiodomethane. Modifications involving cheaper alternatives have been developed, such as dibromomethane or diazomethane and zinc iodide. The reactivity of the system can also be increased by using the Furukawa modification, exchanging the zinc-copper couple for diethylzinc. The Simmons–Smith reaction is generally subject to steric effects, and thus cyclopropanation usually takes place on the less hindered face. However, when a hydroxy substituent is present in the substrate in proximity to the double bond, the zinc coordinates with the hydroxy substituent, directing cyclopropanation cis to the hydroxyl group (which may not correspond to cyclopropanation of the sterically most accessible face of the double bond). An interactive 3D model of this reaction can be seen at ChemTube3D. Asymmetric Simmons–Smith reaction Although asymmetric cyclopropanation methods based on diazo compounds (the metal-catalyzed cyclopropanations) have existed since 1966, the asymmetric Simmons–Smith reaction was introduced in 1992 with a reaction of cinnamyl alcohol with diethylzinc, diiodomethane and a chiral disulfonamide in dichloromethane. The hydroxyl group is a prerequisite, serving as an anchor for zinc. In another version of this reaction the ligand is based on salen and the Lewis acid DIBAL is added. Scope and limitations Achiral alkenes The Simmons–Smith reaction can be used to cyclopropanate simple alkenes without complications. Unfunctionalized achiral alkenes are best cyclopropanated with the Furukawa modification (see below), using Et2Zn and CH2I2 in 1,2-dichloroethane. Cyclopropanation of alkenes activated by electron donating groups proceeds rapidly and easily. For example, enol ethers like trimethylsilyloxy-substituted olefins are often used because of the high yields obtained. Despite the electron-withdrawing nature of halides, many vinyl halides are also easily cyclopropanated, yielding fluoro-, bromo-, and iodo-substituted cyclopropanes. The cyclopropanation of N-substituted alkenes is made complicated by N-alkylation as a competing pathway. This can be circumvented by adding a protecting group to nitrogen; however, the addition of electron-withdrawing groups decreases the nucleophilicity of the alkene, lowering yield. The use of highly electrophilic reagents such as CHFI2, in place of CH2I2, has been shown to increase yield in these cases. Polyenes Without the presence of a directing group on the olefin, very little chemoselectivity is observed. However, an alkene which is significantly more nucleophilic than any others will be highly favored. For example, cyclopropanation occurs highly selectively at enol ethers. 
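To make the stoichiometry of the parent reaction concrete, the scheme below restates the cyclohexene example from the Examples paragraph above as two balanced steps: carbenoid formation, then methylene transfer. It is a schematic summary for illustration only; the copper of the Zn(Cu) couple serves to activate the zinc and is omitted from the mass balance.

```latex
% Schematic Simmons-Smith sequence for the cyclohexene -> norcarane example
\begin{align*}
\mathrm{Zn} + \mathrm{CH_2I_2} &\longrightarrow \mathrm{ICH_2ZnI}\\
\mathrm{C_6H_{10}}\ \text{(cyclohexene)} + \mathrm{ICH_2ZnI} &\longrightarrow \mathrm{C_7H_{12}}\ \text{(norcarane)} + \mathrm{ZnI_2}
\end{align*}
```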
Functional group compatibility An important aspect of the Simmons–Smith reaction that contributes to its wide usage is its ability to be used in the presence of many functional groups. Among others, the haloalkylzinc-mediated reaction is compatible with alkynes, alcohols, ethers, aldehydes, ketones, carboxylic acids and derivatives, carbonates, sulfones, sulfonates, silanes, and stannanes. However, some side reactions are commonly observed. Most side reactions occur due to the Lewis acidity of the byproduct, ZnI2. In reactions that produce acid-sensitive products, excess Et2Zn can be added to scavenge the ZnI2 that is formed, forming the less acidic EtZnI. The reaction can also be quenched with pyridine, which will scavenge ZnI2 and excess reagents. Methylation of heteroatoms is also observed in the Simmons–Smith reaction due to the electrophilicity of the zinc carbenoids. For example, the use of excess reagent for long reaction times almost always leads to the methylation of alcohols. Furthermore, Et2Zn and CH2I2 react with allylic thioethers to generate sulfur ylides, which can subsequently undergo a 2,3-sigmatropic rearrangement, and will not cyclopropanate an alkene in the same molecule unless excess Simmons–Smith reagent is used. Modifications The Simmons–Smith reaction is rarely used in its original form, and a number of modifications to both the zinc reagent and carbenoid precursor have been developed and are more commonly employed. Furukawa modification The Furukawa modification involves the replacement of the zinc-copper couple with a dialkylzinc, the most active of which was found to be Et2Zn. The modification was proposed in 1968 as a way to turn cationically polymerizable olefins such as vinyl ethers into their respective cyclopropanes. It has also been found to be especially useful for the cyclopropanation of carbohydrates, being far more reproducible than other methods. Like the unmodified reaction, the Furukawa-modified reaction is stereospecific, and is often much faster than the unmodified reaction. However, the Et2Zn reagent is pyrophoric, and as such must be handled with care. Charette modification The Charette modification replaces the CH2I2 normally found in the Simmons–Smith reaction with aryldiazo compounds, such as phenyldiazomethane, in Pathway A. Upon treatment with stoichiometric amounts of zinc halide, an organozinc compound similar to the carbenoid discussed above is produced. This can react with almost all alkenes and alkynes, including styrenes and alcohols. This is especially useful, as the unmodified Simmons–Smith reagent is known to deprotonate alcohols. Unfortunately, as shown in Pathway B, the intermediate can also react with the starting diazo compound, giving cis- or trans-1,2-diphenylethene. Additionally, the intermediate can react with alcohols to produce iodophenylmethane, which can further undergo an SN2 reaction to produce ROCHPh, as in Pathway C. Shi modification The highly electrophilic nature of the zinc carbenoid reduces the useful scope of the Simmons–Smith cyclopropanation to electron-rich alkenes and those bearing pendant coordinating groups, most commonly alcohols. In 1998, the Shi group identified a novel zinc carbenoid of the form CF3CO2ZnCH2I, formed from diethylzinc, trifluoroacetic acid and diiodomethane. This zinc carbenoid is far more electrophilic and allows for reaction with unfunctionalized and electron-deficient alkenes, like vinyl boronates. 
A number of acidic modifiers have a similar effect, but trifluoroacetic acid is the most commonly used. The Shi modification of the cyclopropanation is also stereospecific. Further exploration of amino acids led to the development of an asymmetric variant of this cyclopropanation. Non-zinc reagents Although not commonly used, Simmons–Smith reagents that display similar reactive properties to those of zinc have been prepared from aluminum and samarium compounds in the presence of CH2IX. With the use of these reagents, allylic alcohols and isolated olefins can be selectively cyclopropanated in the presence of each other. Iodo- or chloromethylsamarium iodide in THF is an excellent reagent to selectively cyclopropanate the allylic alcohol, presumably directed by chelation to the hydroxyl group. In contrast, use of dialkyl(iodomethyl)aluminum reagents in CH2Cl2 will selectively cyclopropanate the isolated olefin. The specificity of these reagents allows cyclopropanes to be placed in poly-unsaturated systems that zinc-based reagents would cyclopropanate fully and unselectively. For example, i-Bu3Al will cyclopropanate geraniol at the 6 position, while Sm/Hg will cyclopropanate at the 2 position, as shown below. However, both reactions require near-stoichiometric amounts of the starting metal compound, and Sm/Hg must be activated with the highly toxic HgCl2. Uses in synthesis Most modern applications of the Simmons–Smith reaction use the Furukawa modification. Especially relevant and reliable applications are listed below. Insertion to form γ-keto esters A Furukawa-modified Simmons–Smith-generated cyclopropane intermediate is formed in the synthesis of γ-keto esters from β-keto esters. The Simmons–Smith reagent binds first to the carbonyl group and subsequently to the α-carbon of the pseudo-enol that the first reaction forms. This second reagent forms the cyclopropyl intermediate which rapidly fragments into the product. Formation of amido-spiro[2.2]pentanes from allenamides A Furukawa-modified Simmons–Smith reaction cyclopropanates both double bonds in an allenamide to form amido-spiro[2.2]pentanes, featuring two cyclopropyl rings which share one carbon. The product of monocyclopropanation is also formed. Natural product synthesis Cyclopropanation reactions in natural products synthesis have been reviewed. The dehydropeptidase inhibitor cilastatin provides an instructive example of Simmons–Smith reactivity in natural products synthesis. An allyl substituent on the starting material is Simmons–Smith cyclopropanated, and the carboxylic acid is subsequently deprotected via ozonolysis to form the precursor. Pharmaceutical synthesis The Simmons–Smith reaction is used in the syntheses of GSK1360707F, ropanicant and Onglyza (saxagliptin). References External links Simmons–Smith reaction at Organic Chemistry Portal Name reactions
Simmons–Smith reaction
[ "Chemistry" ]
2,426
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
1,693,865
https://en.wikipedia.org/wiki/Nose%20cone%20design
Given the problem of the aerodynamic design of the nose cone section of any vehicle or body meant to travel through a compressible fluid medium (such as a rocket or aircraft, missile, shell or bullet), an important problem is the determination of the nose cone geometrical shape for optimum performance. For many applications, such a task requires the definition of a solid of revolution shape that experiences minimal resistance to rapid motion through such a fluid medium. Nose cone shapes and equations General dimensions In all of the following nose cone shape equations, is the overall length of the nose cone and is the radius of the base of the nose cone. is the radius at any point , as varies from , at the tip of the nose cone, to . The equations define the two-dimensional profile of the nose shape. The full body of revolution of the nose cone is formed by rotating the profile around the centerline . While the equations describe the "perfect" shape, practical nose cones are often blunted or truncated for manufacturing, aerodynamic, or thermodynamic reasons. Conic A very common nose-cone shape is a simple cone. This shape is often chosen for its ease of manufacture. More optimal, streamlined shapes (described below) are often much more difficult to create. The sides of a conic profile are straight lines, so the diameter equation is simply: Cones are sometimes defined by their half angle, : and Spherically blunted conic In practical applications such as re-entry vehicles, a conical nose is often blunted by capping it with a segment of a sphere. The tangency point where the sphere meets the cone can be found, using similar triangles, from: where is the radius of the spherical nose cap. The center of the spherical nose cap, , can be found from: And the apex point, can be found from: Bi-conic A bi-conic nose cone shape is simply a cone with length stacked on top of a frustum of a cone (commonly known as a conical transition section shape) with length , where the base of the upper cone is equal in radius to the top radius of the smaller frustum with base radius . For : For : Half angles: and and Tangent ogive Next to a simple cone, the tangent ogive shape is the most familiar in hobby rocketry. The profile of this shape is formed by a segment of a circle such that the rocket body is tangent to the curve of the nose cone at its base, and the base is on the radius of the circle. The popularity of this shape is largely due to the ease of constructing its profile, as it is simply a circular section. The radius of the circle that forms the ogive is called the ogive radius, , and it is related to the length and base radius of the nose cone as expressed by the formula: The radius at any point , as varies from to is: The nose cone length, , must be less than or equal to . If they are equal, then the shape is a hemisphere. Spherically blunted tangent ogive A tangent ogive nose is often blunted by capping it with a segment of a sphere. The tangency point where the sphere meets the tangent ogive can be found from: where is the radius and is the center of the spherical nose cap. Secant ogive The profile of this shape is also formed by a segment of a circle, but the base of the shape is not on the radius of the circle defined by the ogive radius. The rocket body will not be tangent to the curve of the nose at its base. The ogive radius is not determined by and (as it is for a tangent ogive), but rather is one of the factors to be chosen to define the nose shape. 
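The tangent ogive relations described above translate directly into a short numerical routine. The sketch below is a minimal Python rendering under the assumption that the symbols stripped from the equations in this article are the conventional ones: L for the overall length, R for the base radius, x for the axial station measured from the tip, y for the local radius, and rho for the ogive radius, with rho = (R^2 + L^2) / (2R). The function name is made up for this example.

import math

def tangent_ogive_radius(x: float, L: float, R: float) -> float:
    """Local radius y of a tangent ogive at axial station x (x = 0 at the tip, x = L at the base).

    Assumes the standard relations: ogive radius rho = (R**2 + L**2) / (2 * R)
    and y = sqrt(rho**2 - (L - x)**2) + R - rho.
    """
    rho = (R**2 + L**2) / (2 * R)              # radius of the circle that forms the ogive
    return math.sqrt(rho**2 - (L - x)**2) + R - rho

# Example: a nose cone 100 mm long with a 25 mm base radius.
L, R = 100.0, 25.0
for x in (0.0, 25.0, 50.0, 75.0, 100.0):
    print(f"x = {x:6.1f} mm  ->  y = {tangent_ogive_radius(x, L, R):6.2f} mm")

The profile returns 0 at the tip and exactly R at the base, which is the tangency condition that gives the shape its name.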
If the chosen ogive radius of a secant ogive is greater than the ogive radius of a tangent ogive with the same and , then the resulting secant ogive appears as a tangent ogive with a portion of the base truncated. and Then the radius at any point as varies from to is: If the chosen is less than the tangent ogive and greater than half the length of the nose cone, then the result will be a secant ogive that bulges out to a maximum diameter that is greater than the base diameter. A classic example of this shape is the nose cone of the Honest John. Elliptical The profile of this shape is one-half of an ellipse, with the major axis being the centerline and the minor axis being the base of the nose cone. A rotation of a full ellipse about its major axis is called a prolate spheroid, so an elliptical nose shape would properly be known as a prolate hemispheroid. This shape is popular in subsonic flight (such as model rocketry) due to the blunt nose and tangent base. This is not a shape normally found in professional rocketry, which almost always flies at much higher velocities where other designs are more suitable. If equals , this is a hemisphere. Parabolic This nose shape is not the blunt shape that is envisioned when people commonly refer to a "parabolic" nose cone. The parabolic series nose shape is generated by rotating a segment of a parabola around a line parallel to its latus rectum. This construction is similar to that of the tangent ogive, except that a parabola is the defining shape rather than a circle. Just as it does on an ogive, this construction produces a nose shape with a sharp tip. For the blunt shape typically associated with a parabolic nose, see power series below. (The parabolic shape is also often confused with the elliptical shape.) For : can vary anywhere between and , but the most common values used for nose cone shapes are: For the case of the full parabola () the shape is tangent to the body at its base, and the base is on the axis of the parabola. Values of less than result in a slimmer shape, whose appearance is similar to that of the secant ogive. The shape is no longer tangent at the base, and the base is parallel to, but offset from, the axis of the parabola. Power series The power series includes the shape commonly referred to as a "parabolic" nose cone, but the shape correctly known as a parabolic nose cone is a member of the parabolic series (described above). The power series shape is characterized by its (usually) blunt tip, and by the fact that its base is not tangent to the body tube. There is always a discontinuity at the joint between nose cone and body that looks distinctly non-aerodynamic. The shape can be modified at the base to smooth out this discontinuity. Both a flat-faced cylinder and a cone are members of the power series. The power series nose shape is generated by rotating the curve about the -axis for values of less than . The factor controls the bluntness of the shape. For values of above about , the tip is fairly sharp. As decreases towards zero, the power series nose shape becomes increasingly blunt. For : Common values of include: Haack series Unlike all of the nose cone shapes above, Wolfgang Haack's series shapes are not constructed from geometric figures. The shapes are instead mathematically derived for the purpose of minimizing drag; a related shape with similar derivation being the Sears–Haack body. 
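The power series family described earlier in this section reduces to a one-line formula, again written here with the conventional symbols (L, R, x, y and an exponent n) that the stripped equations presumably used: y = R (x/L)^n. This is only an illustrative sketch; the exponents quoted in the comment are the usual textbook choices, and the names are ad hoc.

def power_series_radius(x: float, L: float, R: float, n: float) -> float:
    """Local radius of a power series nose cone: y = R * (x / L)**n for 0 <= x <= L.

    n = 1 gives a cone, n = 0.75 and n = 0.5 are common blunter choices,
    and n approaching 0 approaches a flat-faced cylinder.
    """
    return R * (x / L) ** n

# Example: compare bluntness near the tip for a few exponents (L = 100, R = 25).
for n in (1.0, 0.75, 0.5):
    profile = [round(power_series_radius(x, 100.0, 25.0, n), 2) for x in (5.0, 20.0, 50.0, 100.0)]
    print(n, profile)

Smaller exponents fill out the profile near the tip, which is exactly the bluntness behaviour the text describes.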
While the series is a continuous set of shapes determined by the value of in the equations below, two values of have particular significance: when , the notation signifies minimum drag for the given length and diameter, and when , indicates minimum drag for a given length and volume. The Haack series nose cones are not perfectly tangent to the body at their base except for the case where . However, the discontinuity is usually so slight as to be imperceptible. For , Haack nose cones bulge to a maximum diameter greater than the base diameter. Haack nose tips do not come to a sharp point, but are slightly rounded. Special values of (as described above) include: Von Kármán The Haack series design giving minimum drag for the given length and diameter, the LD-Haack where , is commonly called the Von Kármán or Von Kármán ogive. Aerospike An aerospike can be used to reduce the forebody pressure acting on supersonic aircraft. The aerospike creates a detached shock ahead of the body, thus reducing the drag acting on the aircraft. Nose cone drag characteristics For aircraft and rockets, below Mach 0.8, the nose pressure drag is essentially zero for all shapes. The major significant factor is friction drag, which is largely dependent upon the wetted area, the surface smoothness of that area, and the presence of any discontinuities in the shape. For example, in strictly subsonic rockets a short, blunt, smooth elliptical shape is usually best. In the transonic region and beyond, where the pressure drag increases dramatically, the effect of nose shape on drag becomes highly significant. The factors influencing the pressure drag are the general shape of the nose cone, its fineness ratio, and its bluffness ratio. Influence of the general shape Many references on nose cone design contain empirical data comparing the drag characteristics of various nose shapes in different flight regimes. The chart shown here seems to be the most comprehensive and useful compilation of data for the flight regime of greatest interest. This chart generally agrees with more detailed, but less comprehensive, data found in other references (most notably the USAF Datcom). In many nose cone designs, the greatest concern is flight performance in the transonic region from Mach 0.8 to Mach 1.2. Although data are not available for many shapes in the transonic region, the table clearly suggests that either the Von Kármán shape, or power series shape with , would be preferable to the popular conical or ogive shapes, for this purpose. This observation goes against the often-repeated conventional wisdom that a conical nose is optimum for "Mach-breaking". Fighter aircraft are probably good examples of nose shapes optimized for the transonic region, although their nose shapes are often distorted by other considerations of avionics and inlets. For example, an F-16 Fighting Falcon nose appears to be a very close match to a Von Kármán shape. Influence of the fineness ratio The ratio of the length of a nose cone compared to its base diameter is known as the fineness ratio. This is sometimes also called the aspect ratio, though that term is usually applied to wings and tails. Fineness ratio is often applied to the entire vehicle, considering the overall length and diameter. The length/diameter relation is also often called the caliber of a nose cone. At supersonic speeds, the fineness ratio has a significant effect on nose cone wave drag, particularly at low ratios; but there is very little additional gain for ratios increasing beyond 5:1. 
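The Haack series profile discussed at the start of this section can be evaluated numerically as well. The sketch below assumes the standard parametrisation that the stripped equations presumably expressed: theta = arccos(1 - 2x/L) and y = (R / sqrt(pi)) * sqrt(theta - sin(2*theta)/2 + C*sin(theta)**3), with C = 0 giving the LD-Haack (Von Kármán) shape and C = 1/3 the LV-Haack shape. Names and sample dimensions are illustrative only.

import math

def haack_radius(x: float, L: float, R: float, C: float) -> float:
    """Local radius of a Haack series nose cone.

    theta = arccos(1 - 2*x/L)
    y     = (R / sqrt(pi)) * sqrt(theta - sin(2*theta)/2 + C * sin(theta)**3)
    C = 0   -> LD-Haack (Von Karman): minimum drag for a given length and diameter
    C = 1/3 -> LV-Haack: minimum drag for a given length and volume
    """
    theta = math.acos(1.0 - 2.0 * x / L)
    return (R / math.sqrt(math.pi)) * math.sqrt(
        theta - math.sin(2.0 * theta) / 2.0 + C * math.sin(theta) ** 3
    )

# Example: Von Karman versus LV-Haack at a few stations (L = 100, R = 25).
for C, label in ((0.0, "LD-Haack / Von Karman"), (1.0 / 3.0, "LV-Haack")):
    print(label, [round(haack_radius(x, 100.0, 25.0, C), 2) for x in (10.0, 50.0, 90.0, 100.0)])

Both curves reach y = R exactly at x = L, and the C = 1/3 profile is slightly fuller, consistent with its minimum-drag-for-volume derivation.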
As the fineness ratio increases, the wetted area, and thus the skin friction component of drag, will also increase. Therefore, the minimum drag fineness ratio will ultimately be a trade-off between the decreasing wave drag and increasing friction drag. See also Index of aviation articles Bullet-nose curve Nose bullet Further reading References Aerodynamics Rocketry
Nose cone design
[ "Chemistry", "Engineering" ]
2,296
[ "Aerospace engineering", "Aerodynamics", "Rocketry", "Fluid dynamics" ]
1,695,214
https://en.wikipedia.org/wiki/Hilbert%27s%20ninth%20problem
Hilbert's ninth problem, from the list of 23 Hilbert's problems (1900), asked to find the most general reciprocity law for the norm residues of k-th order in a general algebraic number field, where k is a power of a prime. Progress made The problem was partially solved by Emil Artin by establishing the Artin reciprocity law which deals with abelian extensions of algebraic number fields. Together with the work of Teiji Takagi and Helmut Hasse (who established the more general Hasse reciprocity law), this led to the development of the class field theory, realizing Hilbert's program in an abstract fashion. Certain explicit formulas for norm residues were later found by Igor Shafarevich (1948; 1949; 1950). The non-abelian generalization, also connected with Hilbert's twelfth problem, is one of the long-standing challenges in number theory and is far from being complete. See also List of unsolved problems in mathematics References External links English translation of Hilbert's original address Algebraic number theory Unsolved problems in number theory 09
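As a concrete anchor for what a reciprocity law asserts, the sketch below numerically checks quadratic reciprocity, the classical k = 2 case that the problem asks to generalize, using Euler's criterion for the Legendre symbol. It is only an illustration of the simplest case, not of the higher norm-residue symbols treated by Artin and Hasse, and the function names are ad hoc.

from itertools import combinations

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) for an odd prime p, computed via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29]

# Quadratic reciprocity: (p|q)(q|p) = (-1)**(((p-1)//2) * ((q-1)//2)) for distinct odd primes p, q.
for p, q in combinations(odd_primes, 2):
    lhs = legendre(p, q) * legendre(q, p)
    rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
    assert lhs == rhs, (p, q)
print("quadratic reciprocity verified for all pairs of odd primes up to 29")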
Hilbert's ninth problem
[ "Mathematics" ]
224
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Hilbert's problems", "Algebraic number theory", "Mathematical problems", "Number theory" ]
1,695,231
https://en.wikipedia.org/wiki/Hilbert%27s%20fourteenth%20problem
In mathematics, Hilbert's fourteenth problem, that is, number 14 of Hilbert's problems proposed in 1900, asks whether certain algebras are finitely generated. The setting is as follows: Assume that k is a field and let K be a subfield of the field of rational functions in n variables, k(x1, ..., xn), over k. Consider now the k-algebra R defined as the intersection R = K ∩ k[x1, ..., xn]. Hilbert conjectured that all such algebras are finitely generated over k. Some results were obtained confirming Hilbert's conjecture in special cases and for certain classes of rings (in particular the conjecture was proved unconditionally for n = 1 and n = 2 by Zariski in 1954). Then in 1959 Masayoshi Nagata found a counterexample to Hilbert's conjecture. The counterexample of Nagata is a suitably constructed ring of invariants for the action of a linear algebraic group. History The problem originally arose in algebraic invariant theory. Here the ring R is given as a (suitably defined) ring of polynomial invariants of a linear algebraic group over a field k acting algebraically on a polynomial ring k[x1, ..., xn] (or more generally, on a finitely generated algebra defined over a field). In this situation the field K is the field of rational functions (quotients of polynomials) in the variables xi which are invariant under the given action of the algebraic group, and the ring R is the ring of polynomials which are invariant under the action. A classical example in the nineteenth century was the extensive study (in particular by Cayley, Sylvester, Clebsch, Paul Gordan and also Hilbert) of invariants of binary forms in two variables with the natural action of the special linear group SL2(k) on it. Hilbert himself proved the finite generation of invariant rings in the case of the field of complex numbers for some classical semi-simple Lie groups (in particular the general linear group over the complex numbers) and specific linear actions on polynomial rings, i.e. actions coming from finite-dimensional representations of the Lie group. This finiteness result was later extended by Hermann Weyl to the class of all semi-simple Lie groups. A major ingredient in Hilbert's proof is the Hilbert basis theorem applied to the ideal inside the polynomial ring generated by the invariants. Zariski's formulation Zariski's formulation of Hilbert's fourteenth problem asks whether, for a quasi-affine algebraic variety X over a field k, possibly assuming X normal or smooth, the ring of regular functions on X is finitely generated over k. Zariski's formulation was shown to be equivalent to the original problem, for X normal. (See also: Zariski's finiteness theorem.) Éfendiev F.F. (Fuad Efendi) provided a symmetric algorithm generating a basis of invariants of n-ary forms of degree r. Nagata's counterexample Nagata gave the following counterexample to Hilbert's problem. The field k is a field containing 48 elements a1i, ..., a16i, for i = 1, 2, 3, that are algebraically independent over the prime field. The ring R is the polynomial ring k[x1, ..., x16, t1, ..., t16] in 32 variables. The vector space V is a 13-dimensional vector space over k consisting of all vectors (b1, ..., b16) in k16 orthogonal to each of the three vectors (a1i, ..., a16i) for i = 1, 2, 3. The vector space V is a 13-dimensional commutative unipotent algebraic group under addition, and its elements act on R by fixing all elements tj and taking xj to xj + bjtj. Then the ring of elements of R invariant under the action of the group V is not a finitely generated k-algebra. 
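For a finite group the setting described above is easy to experiment with symbolically. The sketch below is a toy illustration only, unrelated to Nagata's construction: it averages polynomials over the two-element group acting on k[x, y] by (x, y) -> (-x, -y) (a Reynolds operator), and the resulting invariants all lie in the finitely generated subring generated by x**2, x*y and y**2, which is the finite-generation behaviour Hilbert conjectured in general and Nagata disproved. SymPy is assumed to be available, and the helper name is made up.

import sympy as sp

x, y = sp.symbols("x y")

def reynolds(f):
    """Average f over the group {identity, (x, y) -> (-x, -y)}: a Reynolds operator."""
    return sp.expand((f + f.subs({x: -x, y: -y}, simultaneous=True)) / 2)

samples = [x**3 + x*y, x**2 + 5*x*y + y**2, x**4*y + x, 7*y**2]
for f in samples:
    g = reynolds(f)
    # Invariance check: g is unchanged by the group action.
    assert sp.expand(g.subs({x: -x, y: -y}, simultaneous=True) - g) == 0
    # Every monomial of g has even total degree, so g lies in k[x**2, x*y, y**2].
    print(f, "->", g)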
Several authors have reduced the sizes of the group and the vector space in Nagata's example. For example, it has been shown that over any field there is an action of the sum G of three copies of the additive group on k18 whose ring of invariants is not finitely generated. See also Locally nilpotent derivation References Bibliography O. Zariski, Interpretations algebrico-geometriques du quatorzieme probleme de Hilbert, Bulletin des Sciences Mathematiques 78 (1954), pp. 155–168. Footnotes 14 Invariant theory
Hilbert's fourteenth problem
[ "Physics", "Mathematics" ]
949
[ "Symmetry", "Group actions", "Hilbert's problems", "Mathematical problems", "Invariant theory" ]
1,695,344
https://en.wikipedia.org/wiki/Gene%20conversion
Gene conversion is the process by which one DNA sequence replaces a homologous sequence such that the sequences become identical after the conversion. Gene conversion can be either allelic, meaning that one allele of the same gene replaces another allele, or ectopic, meaning that one paralogous DNA sequence converts another. Allelic gene conversion Allelic gene conversion occurs during meiosis when homologous recombination between heterozygotic sites results in a mismatch in base pairing. This mismatch is then recognized and corrected by the cellular machinery causing one of the alleles to be converted to the other. This can cause non-Mendelian segregation of alleles in germ cells. Nonallelic/ectopic gene conversion Recombination occurs not only during meiosis, but also as a mechanism for repair of double-strand breaks (DSBs) caused by DNA damage. These DSBs are usually repaired using the sister chromatid of the broken duplex and not the homologous chromosome, so they would not result in allelic conversion. Recombination also occurs between homologous sequences present at different genomic loci (paralogous sequences) which have resulted from previous gene duplications. Gene conversion occurring between paralogous sequences (ectopic gene conversion) is conjectured to be responsible for concerted evolution of gene families. Mechanism Conversion of one allele to the other is often due to base mismatch repair during homologous recombination: if one of the four chromatids during meiosis pairs up with another chromatid, as can occur because of sequence homology, DNA strand transfer can occur followed by mismatch repair. This can alter the sequence of one of the chromosomes, so that it is identical to the other. Meiotic recombination is initiated through formation of a double-strand break (DSB). The 5’ ends of the break are then degraded, leaving long 3’ overhangs of several hundred nucleotides. One of these 3’ single stranded DNA segments then invades a homologous sequence on the homologous chromosome, forming an intermediate which can be repaired through different pathways resulting either in crossovers (CO) or noncrossovers (NCO). At various steps of the recombination process, heteroduplex DNA (double-stranded DNA consisting of single strands from each of the two homologous chromosomes which may or may not be perfectly complementary) is formed. When mismatches occur in heteroduplex DNA, the sequence of one strand will be repaired to bind the other strand with perfect complementarity, leading to the conversion of one sequence to another. This repair process can follow either of two alternative pathways as illustrated in the Figure. By one pathway, a structure called a double Holliday junction (DHJ) is formed, leading to the exchange of DNA strands. By the other pathway, referred to as Synthesis Dependent Strand Annealing (SDSA), there is information exchange but not physical exchange. Gene conversion will occur during SDSA if the two DNA molecules are heterozygous at the site of the recombinational repair. Gene conversion may also occur during recombinational repair involving a DHJ, and this gene conversion may be associated with physical recombination of the DNA duplexes on the two sides of the DHJ. Biased vs. unbiased gene conversion Biased gene conversion (BGC) occurs when one allele has a higher probability of being the donor than the other in a gene conversion event. For example, when a T:G mismatch occurs, it would be more or less likely to be corrected to a C:G pair than a T:A pair. 
This gives that allele a higher probability of transmission to the next generation. Unbiased gene conversion means that both possibilities occur with equal probability. GC-biased gene conversion GC-biased gene conversion (gBGC) is the process by which the GC content of DNA increases due to gene conversion during recombination. Evidence for gBGC exists for yeasts and humans and the theory has more recently been tested in other eukaryotic lineages. In analyzed human DNA sequences, crossover rate has been found to correlate positively with GC-content. The pseudoautosomal regions (PAR) of the X and Y chromosomes in humans, which are known to have high recombination rates also have high GC contents. Certain mammalian genes undergoing concerted evolution (for example, ribosomal operons, tRNAs, and histone genes) are very GC-rich. It has been shown that GC content is higher in paralogous human and mouse histone genes that are members of large subfamilies (presumably undergoing concerted evolution) than in paralogous histone genes with relatively unique sequences. There is also evidence for GC bias in the mismatch repair process. It is thought that this may be an adaptation to the high rate of methyl-cytosine deamination which can lead to C→T transitions. BGC of the Fxy gene in Mus musculus The Fxy or Mid1 gene in some mammals closely related to house mice (humans, rats, and other Mus species) is located in the sex-linked region of the X chromosome. However, in Mus musculus, it has recently translocated such that the 3’ end of the gene overlaps with the PAR region of the X-chromosome, which is known to be a recombination hotspot. This portion of the gene has experienced a dramatic increase in GC content and substitution rate at the 3rd codon position as well as in introns but the 5’ region of the gene, which is X-linked, has not. Because this effect is present only in the region of the gene experiencing increased recombination rate, it must be due to biased gene conversion and not selective pressure. Impact of GC-biased gene conversion on human genomic patterns GC content varies widely in the human genome (40–80%), but there seem to be large sections of the genome where GC content is, on average, higher or lower than in other regions. These regions, although not always showing clear boundaries, are known as isochores. One possible explanation for the presence of GC-rich isochores is that they evolved due to GC-biased gene conversion in regions with high levels of recombination. Evolutionary importance Adaptive function of recombination Studies of gene conversion have contributed to our understanding of the adaptive function of meiotic recombination. The ordinary segregation pattern of an allele pair (Aa) among the 4 products of meiosis is 2A:2a. Detection of infrequent gene conversion events (e.g. 3:1 or 1:3 segregation patterns during individual meioses) provides insight into the alternate pathways of recombination leading either to crossover or non-crossover chromosomes. Gene conversion events are thought to arise where the "A" and "a" alleles happen to be near the exact location of a molecular recombination event. Thus, it is possible to measure the frequency with which gene conversion events are associated with crossover or non-crossover of chromosomal regions adjacent to, but outside, the immediate conversion event. 
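The 3:1 and 1:3 tetrad patterns discussed above are easy to caricature in silico. The sketch below is a deliberately simplified toy model, not a description of any published assay: each meiosis starts from an Aa heterozygote whose four chromatids carry 2 A and 2 a, and with a small probability a conversion event rewrites one chromatid using another as template, producing the aberrant segregation ratios that experimenters score. Every parameter value is arbitrary.

import random
from collections import Counter

def simulate_tetrad(p_conversion: float, rng: random.Random) -> str:
    """Segregation pattern of one meiosis from an Aa heterozygote (toy model)."""
    chromatids = ["A", "A", "a", "a"]              # four chromatids after premeiotic replication
    if rng.random() < p_conversion:
        donor, acceptor = rng.sample(range(4), 2)
        chromatids[acceptor] = chromatids[donor]   # mismatch repair copies the donor allele
    n_A = chromatids.count("A")
    return f"{n_A}A:{4 - n_A}a"

rng = random.Random(0)
counts = Counter(simulate_tetrad(0.01, rng) for _ in range(100_000))
print(counts)   # overwhelmingly 2A:2a, with rare 3A:1a and 1A:3a tetrads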
Numerous studies of gene conversion in various fungi (which are especially suited for such studies) have been carried out, and the findings of these studies have been reviewed by Whitehouse. It is clear from this review that most gene conversion events are not associated with outside marker exchange. Thus, most gene conversion events in the several different fungi studied are associated with non-crossover of outside markers. Non-crossover gene conversion events are mainly produced by Synthesis Dependent Strand Annealing (SDSA). This process involves limited informational exchange, but not physical exchange of DNA, between the two participating homologous chromosomes at the site of the conversion event, and little genetic variation is produced. Thus, explanations for the adaptive function of meiotic recombination that focus exclusively on the adaptive benefit of producing new genetic variation or physical exchange seem inadequate to explain the majority of recombination events during meiosis. However, the majority of meiotic recombination events can be explained by the proposal that they are an adaptation for repair of damage in the DNA that is to be passed on to gametes. Of particular interest, from the point of view that recombination is an adaptation for DNA repair, are the studies in yeast showing that gene conversion in mitotic cells is increased by UV and ionizing radiation. Evolution of humans In discussions of genetic diseases in humans, pseudogene-mediated gene conversion that introduces pathogenic mutations into functional genes is a well-known mechanism of mutation. In contrast, it is possible that pseudogenes could serve as templates. During the course of evolution, potentially advantageous functional genes have been derived from the multiple copies of a single source gene. The pseudogene-templated changes might eventually become fixed as long as they did not possess deleterious effects. So, in fact, pseudogenes can act as sources of sequence variants which can be transferred to functional genes in novel combinations and can be acted upon by selection. Sialic acid-binding immunoglobulin-type lectin 11 (SIGLEC11) can be considered an example of a gene conversion event which has played a significant role in evolution. Comparison of the homologous genes of human SIGLEC11 and its pseudogene in the chimpanzee, gorilla and orangutan indicates that there was gene conversion of the sequence of the 5' upstream regions and the exons that encode the sialic acid recognition domain, approximately 2 kbp, from the closely flanking hSIGLECP16 pseudogene (Hayakawa et al., 2005). Three pieces of evidence together suggest that this event was an adaptive change of considerable evolutionary importance in the genus Homo: the gene conversion occurred only in the human lineage, the brain cortex acquired significant expression of SIGLEC11 specifically in the human lineage, and SIGLEC11 exhibits a change in substrate binding in the human lineage compared with its counterpart in chimpanzees. The frequency with which this pseudogene-mediated gene conversion mechanism has contributed to functional and adaptive changes in human evolution is still unknown and has so far been scarcely explored. In spite of that, the example of SIGLEC11 shows that such a mechanism can introduce positively selected genetic changes. 
Sometimes, the insertion of transposable elements into some members of a gene family causes variation among them and may ultimately halt gene conversion, because the loss of sequence similarity leads to divergent evolution. Genomic analysis From various genome analyses, it was concluded that double-strand breaks (DSBs) can be repaired via homologous recombination by at least two different but related pathways. In the major pathway, homologous sequences on both sides of the DSB are employed, which seems analogous to the conservative DSB repair model originally proposed for meiotic recombination in yeast, whereas the minor pathway is restricted to only one side of the DSB, as postulated by the nonconservative one-sided invasion model. In both cases, however, the sequence of the recombination partners is absolutely conserved. By virtue of their high degree of homology, the new gene copies that come into existence following a gene duplication naturally tend to undergo either unequal crossover or unidirectional gene conversion events. In the latter process, there are acceptor and donor sequences: the acceptor sequence is replaced by a sequence copied from the donor, while the sequence of the donor remains unchanged. The effective homology between the interacting sequences makes the gene conversion event successful. Additionally, the frequency of gene conversion is inversely proportional to the distance between the interacting sequences in cis, and the rate of gene conversion is usually directly proportional to the length of the uninterrupted sequence tract in the assumed converted region. It seems that conversion tracts accompanying crossover are longer (mean length = ~460 bp) than conversion tracts without crossover (mean length = 55–290 bp). In studies of human globin genes, it has long been held that gene conversion or branch migration events can be either promoted or inhibited by specific motifs in the vicinity of the DNA sequence. Another basic classification of gene conversion events distinguishes interlocus (also called nonallelic) from interallelic gene conversion. Nonallelic or interlocus gene conversion events, in cis or in trans, occur between nonallelic gene copies residing on sister chromatids or homologous chromosomes, whereas interallelic gene conversion events take place between alleles residing on homologous chromosomes. Interlocus gene conversion events frequently exhibit biased directionality. Sometimes, as in the case of the human globin genes, the direction of gene conversion correlates with the relative expression levels of the genes that participate in the event, with the gene expressed at a higher level, called the 'master' gene, converting the gene with lower expression, called the 'slave' gene. Originally formulated in an evolutionary context, the 'master/slave gene' rule should be interpreted with caution. In fact, increased transcription of a gene raises the likelihood of its being used not only as a donor but also as an acceptor. Effect Normally, an organism that has inherited different copies of a gene from each of its parents is called heterozygous. This is generically represented as genotype: Aa (i.e. one copy of variant (allele) 'A', and one copy of allele 'a'). 
When a heterozygote creates gametes by meiosis, the alleles normally duplicate and end up in a 2:2 ratio in the resulting 4 cells that are the direct products of meiosis. However, in gene conversion, a ratio other than the expected 2A:2a is observed, in which A and a are the two alleles. Examples are 3A:1a and 1A:3a. In other words, there can, for example, be three times as many A alleles as a alleles expressed in the daughter cells, as is the case in 3A:1a. Medical relevance Gene conversion resulting in mutation of the CYP21A2 gene is a common underlying genetic cause of congenital adrenal hyperplasia. Somatic gene conversion is one of the mechanisms that can result in familial retinoblastoma, a congenital cancer of the retina, and it is theorized that gene conversion may play a role in the development of Huntington's disease. References External links images: http://www.web-books.com/MoBio/Free/Ch8D4.htm and http://www.web-books.com/MoBio/Free/Ch8D2.htm Genes Modification of genetic information Molecular evolution
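To connect the transmission bias described in the section on biased gene conversion with allele frequencies over time, the following sketch runs a toy Wright-Fisher simulation in which heterozygotes transmit the GC allele with probability (1 + b)/2 instead of 1/2. It is a deliberately minimal population-genetic caricature, not a model taken from the gene conversion literature, and every parameter value is arbitrary.

import random

def wright_fisher_gbgc(n_individuals=500, p0=0.05, bias=0.02, generations=2000, seed=1):
    """Frequency of a GC allele when heterozygotes transmit it with probability (1 + bias) / 2.

    bias = 0.0 reproduces ordinary neutral drift; bias > 0 mimics GC-biased gene
    conversion, which behaves much like weak selection in favour of the GC allele.
    """
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # Chance that a randomly transmitted gamete carries the GC allele (random mating assumed).
        transmit = p * p + 2 * p * (1 - p) * (1 + bias) / 2
        p = sum(rng.random() < transmit for _ in range(2 * n_individuals)) / (2 * n_individuals)
        if p in (0.0, 1.0):
            break
    return p

print("no bias  :", wright_fisher_gbgc(bias=0.0))
print("with bias:", wright_fisher_gbgc(bias=0.02))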
Gene conversion
[ "Chemistry", "Biology" ]
3,153
[ "Evolutionary processes", "Molecular evolution", "Modification of genetic information", "Molecular genetics", "Molecular biology" ]
17,911,568
https://en.wikipedia.org/wiki/Purine%20analogue
Purine analogues are antimetabolites that mimic the structure of metabolic purines. Examples Nucleobase analogues Thiopurines such as thioguanine are used to treat acute leukemias and remissions in acute granulocytic leukemias. Azathioprine is the main immunosuppressive cytotoxic substance. A prodrug, it is widely used in transplantation to control rejection reactions. It is nonenzymatically cleaved to mercaptopurine, a purine analogue that inhibits DNA synthesis. By preventing the clonal expansion of lymphocytes in the induction phase of the immune response, it affects both cell immunity and humoral immunity. It also successfully suppresses autoimmunity. Mercaptopurine Thioguanine Nucleoside analogues Clofarabine Pentostatin and cladribine are adenosine analogs that are used primarily to treat hairy cell leukemia. Nucleotide analogues Fludarabine inhibits multiple DNA polymerases, DNA primase, and DNA ligase I, and is S phase-specific (since these enzymes are highly active during DNA replication). Medical uses Purine antimetabolites are commonly used to treat cancer by interfering with DNA replication. References Antimetabolites
Purine analogue
[ "Chemistry", "Biology" ]
276
[ "Antimetabolites", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry", "Metabolism" ]
17,911,569
https://en.wikipedia.org/wiki/Pyrimidine%20analogue
Pyrimidine analogues are antimetabolites which mimic the structure of metabolic pyrimidines. Examples Nucleobase analogues Fluorouracil (5FU), which inhibits thymidylate synthase Floxuridine (FUDR) 6-azauracil (6-AU) Nucleoside analogues Cytarabine (Cytosine arabinoside) Gemcitabine Nucleotide analogues Medical uses Pyrimidine antimetabolites are commonly used to treat cancer by interfering with DNA replication. References Antimetabolites Metabolism Pyrimidines
Pyrimidine analogue
[ "Chemistry", "Biology" ]
132
[ "Antimetabolites", "Biotechnology stubs", "Biochemistry stubs", "Cellular processes", "Biochemistry", "Metabolism" ]
17,914,618
https://en.wikipedia.org/wiki/1806%E2%88%9220%20cluster
1806−20 (originally named the SGR 1806−20 cluster) is a heavily obscured star cluster on the far side of the Milky Way, approximately 28,000 light-years distant. Some sources claim as far as 50,000. It contains the soft gamma repeater SGR 1806−20 and the luminous blue variable hypergiant LBV 1806−20, a candidate for the most luminous star in the Milky Way. LBV 1806−20 and many of the other massive stars in the cluster are thought likely to end as supernovas in a few million years, leaving only neutron stars or black holes as remnants. The cluster is heavily obscured by intervening dust, and mostly visible in the infrared. It is part of the larger W31 H II region and giant molecular cloud. It has a compact core of ~0.2 pc in diameter with a more extended halo of ~2 pc in diameter containing the LBV and at least three Wolf–Rayet stars (of types WC8, WN6, and WN7) and an OB supergiant, plus other young massive stars. See also Wolf–Rayet star LBV 1806−20 SGR 1806−20 Hypergiant Star cluster Luminous blue variable Charles Wolf Georges Rayet References External links The Unusual High-Mass Star Cluster 1806−20 Sagittarius (constellation) Open clusters
1806−20 cluster
[ "Astronomy" ]
277
[ "Sagittarius (constellation)", "Constellations" ]
17,915,719
https://en.wikipedia.org/wiki/American%20Academy%20of%20Environmental%20Engineers%20and%20Scientists
The American Academy of Environmental Engineers and Scientists (AAEES) is a society of professional engineers and scientists who have demonstrated special expertise in environmental engineering or science beyond that normally required for professional practice. The principal purpose of the Academy is serving the public by improving the practice, elevating the standards, and advancing public recognition of environmental engineering and science through a program of specialty certification, similar to that used in healthcare and other professions. History The Academy began in 1952, when a group of sanitary engineers working in the public health and defense communities expressed concern about the requirements for professional practice. This led to the creation of the Committee for the Advancement of Sanitary Engineering of the American Society of Civil Engineers (ASCE). Later, the Joint Committee for the Advancement of Sanitary Engineering was formed, composed of representatives from the ASCE, the American Public Health Association, the American Society for Engineering Education, the American Water Works Association and the Water Pollution Control Federation. With the sponsorship and support of these five organizations, the American Sanitary Engineering Intersociety Board was incorporated under the laws of the State of Delaware on October 21, 1955. Certified sanitary engineers were identified as Diplomates (now also known as Board Certified Environmental Engineers) of the American Academy of Sanitary Engineers, and were listed on a roster with the names of each member and their specialties as recognized by the Board. Other organizations joining as sponsors included the American Institute of Chemical Engineers in 1957 and the Air Pollution Control Association in 1962. In 1966 the name of the Board was changed to the "Environmental Engineering Intersociety Board" (EEIB) and the name of the roster from the "American Academy of Sanitary Engineers" to the "American Academy of Environmental Engineers" (AAEE). A year later, in 1967, the Board of Trustees changed the American Academy of Environmental Engineers from just a roster of certified engineers to an organization with its own rights, bylaws and officers. This new organization was charged with working cooperatively with the EEIB in the advancement of all aspects of environmental engineering. The American Public Works Association joined as a sponsor of these organizations in 1969. The AAEE and the EEIB were merged into one organization in 1973. In 1976 the National Society of Professional Engineers became a sponsor of the Academy and it was followed by the Association of Environmental Engineering and Science Professors in 1977, the American Society of Mechanical Engineers in 1981 and the Solid Waste Association of North America in 1988. In 2011, the Academy decided to award board certifications to environmental scientists, which led to the adoption of its current name on 1 January 2013. Certification In addition to education and experience, Board-Certified Environmental Engineers and Scientists must have passed written and oral examinations and reviews by an admission panel of the Academy. The Academy's certification program is accredited by the Council of Engineering and Scientific Specialty Boards. 
Requirements for board certification include a baccalaureate or higher degree in engineering or science from an accredited university, not less than eight years of professional engineering or scientific experience, and an examination on the practice of environmental engineering or science in a particular area of specialization. Exams are offered on the following topics for engineers: Air Pollution Control, Environmental Sustainability, General Environmental Engineering, Hazardous Waste Management, Industrial Hygiene, Radiation Protection, Solid Waste Management and Water Supply/Wastewater Management. Scientists may choose from tests covering Air Resources, Environmental Biology, Environmental Chemistry, Environmental Microbiology, Environmental Toxicology, Groundwater and the Subsurface Environment, Surface Water Resources, or Sustainability Science. Recognition To recognize outstanding practice, the AAEES conducts an annual Excellence in Environmental Engineering and Science Awards Competition where Grand Prizes are awarded to projects submitted in one of nine categories, and a Superior Achievement Award is made to the project with the highest overall score in the annual competition. Additional annual competitions include the Environmental Communication Award Competition as well as the Student Video and Social Media Competition. Individual awards by the AAEES include: AAEES Science Award, Excellence in Environmental Engineering and Science Education, Gordon Maskew Fair Award, Edward J. Cleary Award, Stanley E. Kappe Award, Honorary Member, and International Honorary Member. In collaboration with the Environmental Engineering and Science Foundation (EESF), the AAEES helps to select the recipients of: W. Wesley Eckenfelder Graduate Research Award, Frederick G. Pohland Medal, W. Brewster Snow Award, and Paul F. Boulos Excellence in Computational Hydraulics/Hydrology Award. References External links Official site American engineering organizations . . Professional associations based in the United States Organizations established in 1955 1955 establishments in the United States Engineering societies based in the United States Environmental organizations based in Maryland Environmental health organizations
American Academy of Environmental Engineers and Scientists
[ "Chemistry", "Engineering", "Environmental_science" ]
929
[ "American environmental scientists", "Environmental scientists", "Environmental engineers", "Environmental engineering" ]
17,919,602
https://en.wikipedia.org/wiki/Hierarchical%20and%20recursive%20queries%20in%20SQL
A hierarchical query is a type of SQL query that handles hierarchical model data. Such queries are special cases of more general recursive fixpoint queries, which compute transitive closures. In standard SQL:1999 hierarchical queries are implemented by way of recursive common table expressions (CTEs). Unlike Oracle's earlier connect-by clause, recursive CTEs were designed with fixpoint semantics from the beginning. Recursive CTEs from the standard were relatively close to the existing implementation in IBM DB2 version 2. Recursive CTEs are also supported by Microsoft SQL Server (since SQL Server 2008 R2), Firebird 2.1, PostgreSQL 8.4+, SQLite 3.8.3+, IBM Informix version 11.50+, CUBRID, MariaDB 10.2+ and MySQL 8.0.1+. Tableau has documentation describing how CTEs can be used. TIBCO Spotfire does not support CTEs, while Oracle 11g Release 2's implementation lacks fixpoint semantics. Without common table expressions or connect-by clauses it is possible to achieve hierarchical queries with user-defined recursive functions. Common table expression A common table expression, or CTE (in SQL), is a temporary named result set, derived from a simple query and defined within the execution scope of a SELECT, INSERT, UPDATE, or DELETE statement. CTEs can be thought of as alternatives to derived tables (subqueries), views, and inline user-defined functions. Common table expressions are supported by Teradata (starting with version 14), IBM Db2, Informix (starting with version 14.1), Firebird (starting with version 2.1), Microsoft SQL Server (starting with version 2005), Oracle (with recursion since 11g release 2), PostgreSQL (since 8.4), MariaDB (since 10.2), MySQL (since 8.0), SQLite (since 3.8.3), HyperSQL, Informix (since 14.10), Google BigQuery, Sybase (starting with version 9), Vertica, H2 (experimental), and many others. Oracle calls CTEs "subquery factoring". The syntax for a CTE (which may or may not be recursive) is as follows: WITH [RECURSIVE] with_query [, ...] SELECT ... where with_query's syntax is: query_name [ (column_name [,...]) ] AS (SELECT ...) Recursive CTEs can be used to traverse relations (as graphs or trees) although the syntax is much more involved because there are no automatic pseudo-columns created (like LEVEL below); if these are desired, they have to be created in the code. See MSDN documentation or IBM documentation for tutorial examples. The RECURSIVE keyword is not usually needed after WITH in systems other than PostgreSQL. In SQL:1999 a recursive (CTE) query may appear anywhere a query is allowed. It's possible, for example, to name the result using CREATE [RECURSIVE] VIEW. Using a CTE inside an INSERT INTO, one can populate a table with data generated from a recursive query; random data generation is possible using this technique without using any procedural statements. Some databases, like PostgreSQL, support a shorter CREATE RECURSIVE VIEW format which is internally translated into WITH RECURSIVE coding. An example of a recursive query computing the factorial of numbers from 0 to 9 is the following: WITH recursive temp (n, fact) AS ( SELECT 0, 1 -- Initial Subquery UNION ALL SELECT n+1, (n+1)*fact FROM temp WHERE n < 9 -- Recursive Subquery ) SELECT * FROM temp; CONNECT BY An alternative syntax is the non-standard CONNECT BY construct; it was introduced by Oracle in the 1980s. 
Prior to Oracle 10g, the construct was only useful for traversing acyclic graphs because it returned an error on detecting any cycles; in version 10g Oracle introduced the NOCYCLE feature (and keyword), making the traversal work in the presence of cycles as well. CONNECT BY is supported by Snowflake, EnterpriseDB, Oracle database, CUBRID, IBM Informix and IBM Db2 although only if it is enabled as a compatibility mode. The syntax is as follows: SELECT select_list FROM table_expression [ WHERE ... ] [ START WITH start_expression ] CONNECT BY [NOCYCLE] { PRIOR child_expr = parent_expr | parent_expr = PRIOR child_expr } [ ORDER SIBLINGS BY column1 [ ASC | DESC ] [, column2 [ ASC | DESC ] ] ... ] [ GROUP BY ... ] [ HAVING ... ] ... For example, SELECT LEVEL, LPAD (' ', 2 * (LEVEL - 1)) || ename "employee", empno, mgr "manager" FROM emp START WITH mgr IS NULL CONNECT BY PRIOR empno = mgr; The output from the above query would look like: level | employee | empno | manager -------+-------------+-------+--------- 1 | KING | 7839 | 2 | JONES | 7566 | 7839 3 | SCOTT | 7788 | 7566 4 | ADAMS | 7876 | 7788 3 | FORD | 7902 | 7566 4 | SMITH | 7369 | 7902 2 | BLAKE | 7698 | 7839 3 | ALLEN | 7499 | 7698 3 | WARD | 7521 | 7698 3 | MARTIN | 7654 | 7698 3 | TURNER | 7844 | 7698 3 | JAMES | 7900 | 7698 2 | CLARK | 7782 | 7839 3 | MILLER | 7934 | 7782 (14 rows) Pseudo-columns Unary operators The following example returns the last name of each employee in department 10, each manager above that employee in the hierarchy, the number of levels between manager and employee, and the path between the two: SELECT ename "Employee", CONNECT_BY_ROOT ename "Manager", LEVEL-1 "Pathlen", SYS_CONNECT_BY_PATH(ename, '/') "Path" FROM emp WHERE LEVEL > 1 AND deptno = 10 CONNECT BY PRIOR empno = mgr ORDER BY "Employee", "Manager", "Pathlen", "Path"; Functions SYS_CONNECT_BY_PATH See also Datalog also implements fixpoint queries Regular path queries are a specific kind of recursive query in graph databases Deductive databases Hierarchical model Reachability Transitive closure Tree structure References Further reading Academic textbooks. Note that these cover only the SQL:1999 standard (and Datalog), but not the Oracle extension. Chapter 24. External links https://stackoverflow.com/questions/1731889/cycle-detection-with-recursive-subquery-factoring http://explainextended.com/2009/11/18/sql-server-are-the-recursive-ctes-really-set-based/ https://web.archive.org/web/20131114094211/http://gennick.com/with.html http://www.cs.duke.edu/courses/fall04/cps116/lectures/11-recursion.pdf http://www.blacktdn.com.br/2015/06/blacktdn-mssql-usando-consulta-cte.html Database management systems SQL Articles with example SQL code Recursion
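Because SQLite is one of the engines listed above with recursive CTE support (3.8.3+), the factorial example from the common table expression section, and a recursive CTE standing in for a CONNECT BY walk of the emp table, can both be run from Python's standard library. The table contents below are a small invented subset of the sample data shown above.

import sqlite3

conn = sqlite3.connect(":memory:")

# The factorial CTE from the article, essentially verbatim.
factorials = conn.execute("""
    WITH RECURSIVE temp (n, fact) AS (
        SELECT 0, 1                                           -- initial subquery
        UNION ALL
        SELECT n + 1, (n + 1) * fact FROM temp WHERE n < 9    -- recursive subquery
    )
    SELECT * FROM temp;
""").fetchall()
print(factorials)          # [(0, 1), (1, 1), (2, 2), ..., (9, 362880)]

# A recursive CTE emulating a CONNECT BY traversal, tracking the level by hand.
conn.executescript("""
    CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, mgr INTEGER);
    INSERT INTO emp VALUES (7839, 'KING', NULL), (7566, 'JONES', 7839), (7788, 'SCOTT', 7566);
""")
rows = conn.execute("""
    WITH RECURSIVE chain (empno, ename, mgr, level) AS (
        SELECT empno, ename, mgr, 1 FROM emp WHERE mgr IS NULL    -- START WITH mgr IS NULL
        UNION ALL
        SELECT e.empno, e.ename, e.mgr, c.level + 1               -- CONNECT BY PRIOR empno = mgr
        FROM emp e JOIN chain c ON e.mgr = c.empno
    )
    SELECT level, ename FROM chain ORDER BY level;
""").fetchall()
print(rows)                # [(1, 'KING'), (2, 'JONES'), (3, 'SCOTT')]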
Hierarchical and recursive queries in SQL
[ "Mathematics" ]
1,708
[ "Mathematical logic", "Recursion" ]
17,920,440
https://en.wikipedia.org/wiki/Slave%20boson
The slave boson method is a technique for dealing with models of strongly correlated systems, providing a method to second-quantize valence fluctuations within a restrictive manifold of states. In the 1960s the physicist John Hubbard introduced an operator, now named the "Hubbard operator" to describe the creation of an electron within a restrictive manifold of valence configurations. Consider for example, a rare earth or actinide ion in which strong Coulomb interactions restrict the charge fluctuations to two valence states, such as the Ce4+(4f0) and Ce3+ (4f1) configurations of a mixed-valence cerium compound. The corresponding quantum states of these two states are the singlet state and the magnetic state, where is the spin. The fermionic Hubbard operators that link these states are then The algebra of operators is closed by introducing the two bosonic operators Together, these operators satisfy the graded Lie algebra where the and the sign is chosen to be negative, unless both and are fermions, in which case it is positive. The Hubbard operators are the generators of the super group SU(2|1). This non-canonical algebra means that these operators do not satisfy a Wick's theorem, which prevents a conventional diagrammatic or field theoretic treatment. In 1983 Piers Coleman introduced the slave boson formulation of the Hubbard operators, which enabled valence fluctuations to be treated within a field-theoretic approach. In this approach, the spinless configuration of the ion is represented by a spinless "slave boson" , whereas the magnetic configuration is represented by an Abrikosov slave fermion. From these considerations, it is seen that the Hubbard operators can be written as and This factorization of the Hubbard operators faithfully preserves the graded Lie algebra. Moreover, the Hubbard operators so written commute with the conserved quantity In Hubbard's original approach, , but by generalizing this quantity to larger values, higher irreducible representations of SU(2|1) are generated. The slave boson representation can be extended from two component to component fermions, where the spin index runs over values. By allowing to become large, while maintaining the ratio , it is possible to develop a controlled large- expansion. The slave boson approach has since been widely applied to strongly correlated electron systems, and has proven useful in developing the resonating valence bond theory (RVB) of high temperature superconductivity and the understanding of heavy fermion compounds. Bibliography Condensed matter physics
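The graded commutation relations quoted above (whose symbols have evidently been stripped from this text) can be spot-checked numerically. In the physical subspace the Hubbard operators are just the matrix units X_ab = |a><b| on the three states {empty, up, down}, so the sketch below verifies that X_ab X_cd -/+ X_cd X_ab equals delta_bc X_ad -/+ delta_da X_cb, using the anticommutator exactly when both operators connect the empty (bosonic) state to a magnetic one. The basis ordering, labels and helper names are choices made for this example, not notation from the literature.

import numpy as np

states = ["0", "up", "dn"]                     # empty (bosonic) state and the two magnetic states
index = {s: i for i, s in enumerate(states)}

def X(a, b):
    """Hubbard operator X_ab = |a><b| as a 3x3 matrix unit on the physical subspace."""
    m = np.zeros((3, 3))
    m[index[a], index[b]] = 1.0
    return m

def fermionic(a, b):
    """An operator is fermionic when it connects the empty state with a magnetic state."""
    return (a == "0") != (b == "0")

def delta(a, b):
    return 1.0 if a == b else 0.0

for a in states:
    for b in states:
        for c in states:
            for d in states:
                sign = 1.0 if fermionic(a, b) and fermionic(c, d) else -1.0   # anticommutator only for two fermions
                lhs = X(a, b) @ X(c, d) + sign * X(c, d) @ X(a, b)
                rhs = delta(b, c) * X(a, d) + sign * delta(d, a) * X(c, b)
                assert np.allclose(lhs, rhs), (a, b, c, d)

print("graded commutation relations hold for all 81 pairs of Hubbard operators")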
Slave boson
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
516
[ "Materials science stubs", "Phases of matter", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
17,921,343
https://en.wikipedia.org/wiki/Buthionine%20sulfoximine
Buthionine sulfoximine (BSO) is a sulfoximine derivative which reduces levels of glutathione and is being investigated as an adjunct with chemotherapy in the treatment of cancer. The compound inhibits gamma-glutamylcysteine synthetase, the enzyme required in the first step of glutathione synthesis. Buthionine sulfoximine may also be used to increase the sensitivity of parasites to oxidative antiparasitic drugs. References Alpha-Amino acids Antineoplastic drugs Sulfoximines
Buthionine sulfoximine
[ "Chemistry" ]
120
[ "Sulfoximines", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
17,922,833
https://en.wikipedia.org/wiki/OU%20Andromedae
OU Andromedae (also HR 9024) is a rotationally variable star in the constellation Andromeda. Varying between magnitudes 5.87 and 5.94, it has been classified as an FK Comae Berenices variable, but the classification is still uncertain. It has a spectral classification of G1IIIe, meaning that it is a giant star that shows emission lines in its spectrum. It is also likely in its horizontal branch phase of evolution. In 1985, Jeffrey Hopkins et al. discovered that HR 9024 is a variable star, with a period of ~23.3 days. It was given the variable star designation OU Andromedae in 1986. In 2007, Paola Testa et al. reported that the star showed X-ray flare activity. Fast rotation The spin rate of OU Andromedae is unusually high for an evolved star of this type, showing a projected rotational velocity of 21.5 km/s. One possible explanation is that it may have engulfed a nearby giant planet, such as a hot Jupiter, since an infrared excess has been observed. Another explanation relies on its strong magnetic field; if OU Andromedae was an Ap star during its main sequence stage of evolution, it could have retained both the strong magnetic field and the fast rotation of Ap stars. X-ray source OU Andromedae is a bright X-ray source, due to the activity of its corona; it is estimated that solar-like active regions cover 30% of the surface. This is another effect of the strong magnetic field, which produces an uninterrupted flaring activity that generates a large volume of hot plasma at coronal temperatures (~7.5 MK). References Andromeda (constellation) 223460 117503 9024 Andromedae, OU G-type giants FK Comae Berenices variables Durchmusterung objects Emission-line stars Horizontal-branch stars Hertzsprung gap
OU Andromedae
[ "Astronomy" ]
403
[ "Andromeda (constellation)", "Constellations" ]
18,973,446
https://en.wikipedia.org/wiki/Geometry
Geometry is a branch of mathematics concerned with properties of space such as the distance, shape, size, and relative position of figures. Geometry is, along with arithmetic, one of the oldest branches of mathematics. A mathematician who works in the field of geometry is called a geometer. Until the 19th century, geometry was almost exclusively devoted to Euclidean geometry, which includes the notions of point, line, plane, distance, angle, surface, and curve, as fundamental concepts. Originally developed to model the physical world, geometry has applications in almost all sciences, and also in art, architecture, and other activities that are related to graphics. Geometry also has applications in areas of mathematics that are apparently unrelated. For example, methods of algebraic geometry are fundamental in Wiles's proof of Fermat's Last Theorem, a problem that was stated in terms of elementary arithmetic, and remained unsolved for several centuries. During the 19th century several discoveries dramatically enlarged the scope of geometry. One of the oldest such discoveries is Carl Friedrich Gauss's Theorema Egregium ("remarkable theorem") that asserts roughly that the Gaussian curvature of a surface is independent of any specific embedding in a Euclidean space. This implies that surfaces can be studied intrinsically, that is, as stand-alone spaces, and has been expanded into the theory of manifolds and Riemannian geometry. Later in the 19th century, it appeared that geometries without the parallel postulate (non-Euclidean geometries) can be developed without introducing any contradiction. The geometry that underlies general relativity is a famous application of non-Euclidean geometry. Since the late 19th century, the scope of geometry has been greatly expanded, and the field has been split into many subfields that depend on the underlying methods—differential geometry, algebraic geometry, computational geometry, algebraic topology, discrete geometry (also known as combinatorial geometry), etc.—or on the properties of Euclidean spaces that are disregarded—projective geometry that considers only alignment of points but not distance and parallelism, affine geometry that omits the concept of angle and distance, finite geometry that omits continuity, and others. This enlargement of the scope of geometry led to a change of meaning of the word "space", which originally referred to the three-dimensional space of the physical world and its model provided by Euclidean geometry; presently a geometric space, or simply a space, is a mathematical structure on which some geometry is defined. History The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt in the 2nd millennium BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. The earliest known texts on geometry are the Egyptian Rhind Papyrus (2000–1800 BC) and Moscow Papyrus (), and the Babylonian clay tablets, such as Plimpton 322 (1900 BC). For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum. Later clay tablets (350–50 BC) demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space. These geometric procedures anticipated the Oxford Calculators, including the mean speed theorem, by 14 centuries. 
South of Egypt the ancient Nubians established a system of geometry including early versions of sun clocks. In the 7th century BC, the Greek mathematician Thales of Miletus used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales's theorem. Pythagoras established the Pythagorean School, which is credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history. Eudoxus (408–) developed the method of exhaustion, which allowed the calculation of areas and volumes of curvilinear figures, as well as a theory of ratios that avoided the problem of incommensurable magnitudes, which enabled subsequent geometers to make significant advances. Around 300 BC, geometry was revolutionized by Euclid, whose Elements, widely considered the most successful and influential textbook of all time, introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West until the middle of the 20th century and its contents are still taught in geometry classes today. Archimedes () of Syracuse, Italy used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave remarkably accurate approximations of pi. He also studied the spiral bearing his name and obtained formulas for the volumes of surfaces of revolution. Indian mathematicians also made many important contributions in geometry. The Shatapatha Brahmana (3rd century BC) contains rules for ritual geometric constructions that are similar to the Sulba Sutras. According to , the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians. They contain lists of Pythagorean triples, which are particular cases of Diophantine equations. In the Bakhshali manuscript, there are a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's Aryabhatiya (499) includes the computation of areas and volumes. Brahmagupta wrote his astronomical work in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas). In the Middle Ages, mathematics in medieval Islam contributed to the development of geometry, especially algebraic geometry. Al-Mahani (b. 853) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. 
Thābit ibn Qurra (known as Thebit in Latin) (836–901) dealt with arithmetic operations applied to ratios of geometrical quantities, and contributed to the development of analytic geometry. Omar Khayyam (1048–1131) found geometric solutions to cubic equations. The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were part of a line of research on the parallel postulate continued by later European geometers, including Vitello (), Gersonides (1288–1344), Alfonso, John Wallis, and Giovanni Girolamo Saccheri, that by the 19th century led to the discovery of hyperbolic geometry. In the early 17th century, there were two important developments in geometry. The first was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry studies properties of shapes which are unchanged under projections and sections, especially as they relate to artistic perspective. Two developments in geometry in the 19th century changed the way it had been studied previously. These were the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky, János Bolyai and Carl Friedrich Gauss and of the formulation of symmetry as the central consideration in the Erlangen programme of Felix Klein (which generalized the Euclidean and non-Euclidean geometries). Two of the master geometers of the time were Bernhard Riemann (1826–1866), working primarily with tools from mathematical analysis, and introducing the Riemann surface, and Henri Poincaré, the founder of algebraic topology and the geometric theory of dynamical systems. As a consequence of these major changes in the conception of geometry, the concept of "space" became something rich and varied, and the natural background for theories as different as complex analysis and classical mechanics. Main concepts The following are some of the most important concepts in geometry. Axioms Euclid took an abstract approach to geometry in his Elements, one of the most influential books ever written. Euclid introduced certain axioms, or postulates, expressing primary or self-evident properties of points, lines, and planes. He proceeded to rigorously deduce other properties by mathematical reasoning. The characteristic feature of Euclid's approach to geometry was its rigor, and it has come to be known as axiomatic or synthetic geometry. At the start of the 19th century, the discovery of non-Euclidean geometries by Nikolai Ivanovich Lobachevsky (1792–1856), János Bolyai (1802–1860), Carl Friedrich Gauss (1777–1855) and others led to a revival of interest in this discipline, and in the 20th century, David Hilbert (1862–1943) employed axiomatic reasoning in an attempt to provide a modern foundation of geometry. Spaces and subspaces Points Points are generally considered fundamental objects for building geometry. They may be defined by the properties that they must have, as in Euclid's definition as "that which has no part", or in synthetic geometry. In modern mathematics, they are generally defined as elements of a set called space, which is itself axiomatically defined. 
With these modern definitions, every geometric shape is defined as a set of points; this is not the case in synthetic geometry, where a line is another fundamental object that is not viewed as the set of the points through which it passes. However, there are modern geometries in which points are not primitive objects, or even without points. One of the oldest such geometries is Whitehead's point-free geometry, formulated by Alfred North Whitehead in 1919–1920. Lines Euclid described a line as "breadthless length" which "lies equally with respect to the points on itself". In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry, a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation, but in a more abstract setting, such as incidence geometry, a line may be an independent object, distinct from the set of points which lie on it. In differential geometry, a geodesic is a generalization of the notion of a line to curved spaces. Planes In Euclidean geometry a plane is a flat, two-dimensional surface that extends infinitely; the definitions for other types of geometries are generalizations of that. Planes are used in many areas of geometry. For instance, planes can be studied as a topological surface without reference to distances or angles; it can be studied as an affine space, where collinearity and ratios can be studied but not distances; it can be studied as the complex plane using techniques of complex analysis; and so on. Curves A curve is a 1-dimensional object that may be straight (like a line) or not; curves in 2-dimensional space are called plane curves and those in 3-dimensional space are called space curves. In topology, a curve is defined by a function from an interval of the real numbers to another space. In differential geometry, the same definition is used, but the defining function is required to be differentiable. Algebraic geometry studies algebraic curves, which are defined as algebraic varieties of dimension one. Surfaces A surface is a two-dimensional object, such as a sphere or paraboloid. In differential geometry and topology, surfaces are described by two-dimensional 'patches' (or neighborhoods) that are assembled by diffeomorphisms or homeomorphisms, respectively. In algebraic geometry, surfaces are described by polynomial equations. Solids A solid is a three-dimensional object bounded by a closed surface; for example, a ball is the volume bounded by a sphere. Manifolds A manifold is a generalization of the concepts of curve and surface. In topology, a manifold is a topological space where every point has a neighborhood that is homeomorphic to Euclidean space. In differential geometry, a differentiable manifold is a space where each neighborhood is diffeomorphic to Euclidean space. Manifolds are used extensively in physics, including in general relativity and string theory. Angles Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. In modern terms, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. The size of an angle is formalized as an angular measure. In Euclidean geometry, angles are used to study polygons and triangles, as well as forming an object of study in their own right. 
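As a small illustration of angular measure in coordinates (an editorial sketch based on the standard dot-product formula, not part of the original text), the angle between two rays with a common vertex can be computed from direction vectors:

```python
import math

def angle(u, v):
    """Angle in radians between two nonzero plane vectors u and v,
    using the dot-product formula cos(theta) = (u . v) / (|u| |v|)."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm_u = math.hypot(u[0], u[1])
    norm_v = math.hypot(v[0], v[1])
    return math.acos(dot / (norm_u * norm_v))

# The rays from the origin through (1, 0) and (1, 1) meet at 45 degrees.
print(math.degrees(angle((1.0, 0.0), (1.0, 1.0))))
```

The same formula, applied to tangent vectors, underlies the angle computations between curves and surfaces mentioned below.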
The study of the angles of a triangle or of angles in a unit circle forms the basis of trigonometry. In differential geometry and calculus, the angles between plane curves or space curves or surfaces can be calculated using the derivative. Measures: length, area, and volume Length, area, and volume describe the size or extent of an object in one dimension, two dimension, and three dimensions respectively. In Euclidean geometry and analytic geometry, the length of a line segment can often be calculated by the Pythagorean theorem. Area and volume can be defined as fundamental quantities separate from length, or they can be described and calculated in terms of lengths in a plane or 3-dimensional space. Mathematicians have found many explicit formulas for area and formulas for volume of various geometric objects. In calculus, area and volume can be defined in terms of integrals, such as the Riemann integral or the Lebesgue integral. Other geometrical measures include the curvature and compactness. Metrics and measures The concept of length or distance can be generalized, leading to the idea of metrics. For instance, the Euclidean metric measures the distance between points in the Euclidean plane, while the hyperbolic metric measures the distance in the hyperbolic plane. Other important examples of metrics include the Lorentz metric of special relativity and the semi-Riemannian metrics of general relativity. In a different direction, the concepts of length, area and volume are extended by measure theory, which studies methods of assigning a size or measure to sets, where the measures follow rules similar to those of classical area and volume. Congruence and similarity Congruence and similarity are concepts that describe when two shapes have similar characteristics. In Euclidean geometry, similarity is used to describe objects that have the same shape, while congruence is used to describe objects that are the same in both size and shape. Hilbert, in his work on creating a more rigorous foundation for geometry, treated congruence as an undefined term whose properties are defined by axioms. Congruence and similarity are generalized in transformation geometry, which studies the properties of geometric objects that are preserved by different kinds of transformations. Compass and straightedge constructions Classical geometers paid special attention to constructing geometric objects that had been described in some other way. Classically, the only instruments used in most geometric constructions are the compass and straightedge. Also, every construction had to be complete in a finite number of steps. However, some problems turned out to be difficult or impossible to solve by these means alone, and ingenious constructions using neusis, parabolas and other curves, or mechanical devices, were found. Rotation and orientation The geometrical concepts of rotation and orientation define part of the placement of objects embedded in the plane or in space. Dimension Traditional geometry allowed dimensions 1 (a line or curve), 2 (a plane or surface), and 3 (our ambient world conceived of as three-dimensional space). Furthermore, mathematicians and physicists have used higher dimensions for nearly two centuries. One example of a mathematical use for higher dimensions is the configuration space of a physical system, which has a dimension equal to the system's degrees of freedom. For instance, the configuration of a screw can be described by five coordinates. 
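As a further worked instance of this counting of degrees of freedom (an editorial example, not from the original text), the configuration of N unconstrained point particles in three-dimensional space is specified by listing all of their coordinates,

```latex
(x_1, y_1, z_1, \dots, x_N, y_N, z_N) \in \mathbb{R}^{3N},
```

so the configuration space has dimension 3N; two free particles, for example, have a 6-dimensional configuration space.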
In general topology, the concept of dimension has been extended from natural numbers, to infinite dimension (Hilbert spaces, for example) and positive real numbers (in fractal geometry). In algebraic geometry, the dimension of an algebraic variety has received a number of apparently different definitions, which are all equivalent in the most common cases. Symmetry The theme of symmetry in geometry is nearly as old as the science of geometry itself. Symmetric shapes such as the circle, regular polygons and platonic solids held deep significance for many ancient philosophers and were investigated in detail before the time of Euclid. Symmetric patterns occur in nature and were artistically rendered in a multitude of forms, including the graphics of Leonardo da Vinci, M. C. Escher, and others. In the second half of the 19th century, the relationship between symmetry and geometry came under intense scrutiny. Felix Klein's Erlangen program proclaimed that, in a very precise sense, symmetry, expressed via the notion of a transformation group, determines what geometry is. Symmetry in classical Euclidean geometry is represented by congruences and rigid motions, whereas in projective geometry an analogous role is played by collineations, geometric transformations that take straight lines into straight lines. However it was in the new geometries of Bolyai and Lobachevsky, Riemann, Clifford and Klein, and Sophus Lie that Klein's idea to 'define a geometry via its symmetry group' found its inspiration. Both discrete and continuous symmetries play prominent roles in geometry, the former in topology and geometric group theory, the latter in Lie theory and Riemannian geometry. A different type of symmetry is the principle of duality in projective geometry, among other fields. This meta-phenomenon can roughly be described as follows: in any theorem, exchange point with plane, join with meet, lies in with contains, and the result is an equally true theorem. A similar and closely related form of duality exists between a vector space and its dual space. Contemporary geometry Euclidean geometry Euclidean geometry is geometry in its classical sense. As it models the space of the physical world, it is used in many scientific areas, such as mechanics, astronomy, crystallography, and many technical fields, such as engineering, architecture, geodesy, aerodynamics, and navigation. The mandatory educational curriculum of the majority of nations includes the study of Euclidean concepts such as points, lines, planes, angles, triangles, congruence, similarity, solid figures, circles, and analytic geometry. Euclidean vectors Euclidean vectors are used for a myriad of applications in physics and engineering, such as position, displacement, deformation, velocity, acceleration, force, etc. Differential geometry Differential geometry uses techniques of calculus and linear algebra to study problems in geometry. It has applications in physics, econometrics, and bioinformatics, among others. In particular, differential geometry is of importance to mathematical physics due to Albert Einstein's general relativity postulation that the universe is curved. Differential geometry can either be intrinsic (meaning that the spaces it considers are smooth manifolds whose geometric structure is governed by a Riemannian metric, which determines how distances are measured near each point) or extrinsic (where the object under study is a part of some ambient flat Euclidean space). 
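A standard textbook example of such an intrinsic description (added here for illustration; it is not part of the original text) is the unit sphere with its metric written in spherical coordinates θ and φ:

```latex
ds^{2} = d\theta^{2} + \sin^{2}\theta \, d\varphi^{2}
```

This expression determines lengths of curves and angles on the sphere without any reference to an embedding in the surrounding three-dimensional space.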
Non-Euclidean geometry Topology Topology is the field concerned with the properties of continuous mappings, and can be considered a generalization of Euclidean geometry. In practice, topology often means dealing with large-scale properties of spaces, such as connectedness and compactness. The field of topology, which saw massive development in the 20th century, is in a technical sense a type of transformation geometry, in which transformations are homeomorphisms. This has often been expressed in the form of the saying 'topology is rubber-sheet geometry'. Subfields of topology include geometric topology, differential topology, algebraic topology and general topology. Algebraic geometry Algebraic geometry is fundamentally the study by means of algebraic methods of some geometrical shapes, called algebraic sets, and defined as common zeros of multivariate polynomials. Algebraic geometry became an autonomous subfield of geometry around 1900, with a theorem called Hilbert's Nullstellensatz that establishes a strong correspondence between algebraic sets and ideals of polynomial rings. This led to a parallel development of algebraic geometry, and its algebraic counterpart, called commutative algebra. From the late 1950s through the mid-1970s algebraic geometry underwent major foundational development, with the introduction by Alexander Grothendieck of scheme theory, which allows the use of topological methods, including cohomology theories, in a purely algebraic context. Scheme theory made it possible to solve many difficult problems not only in geometry, but also in number theory. Wiles' proof of Fermat's Last Theorem is a famous example of a long-standing problem of number theory whose solution uses scheme theory and its extensions such as stack theory. One of seven Millennium Prize problems, the Hodge conjecture, is a question in algebraic geometry. Algebraic geometry has applications in many areas, including cryptography and string theory. Complex geometry Complex geometry studies the nature of geometric structures modelled on, or arising out of, the complex plane. Complex geometry lies at the intersection of differential geometry, algebraic geometry, and analysis of several complex variables, and has found applications to string theory and mirror symmetry. Complex geometry first appeared as a distinct area of study in the work of Bernhard Riemann in his study of Riemann surfaces. Work in the spirit of Riemann was carried out by the Italian school of algebraic geometry in the early 1900s. Contemporary treatment of complex geometry began with the work of Jean-Pierre Serre, who introduced the concept of sheaves to the subject, and illuminated the relations between complex geometry and algebraic geometry. The primary objects of study in complex geometry are complex manifolds, complex algebraic varieties, and complex analytic varieties, and holomorphic vector bundles and coherent sheaves over these spaces. Special examples of spaces studied in complex geometry include Riemann surfaces and Calabi–Yau manifolds, and these spaces find uses in string theory. In particular, worldsheets of strings are modelled by Riemann surfaces, and superstring theory predicts that the extra 6 dimensions of 10-dimensional spacetime may be modelled by Calabi–Yau manifolds. Discrete geometry Discrete geometry is a subject that has close connections with convex geometry. It is concerned mainly with questions of relative position of simple geometric objects, such as points, lines and circles.
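A toy computation in this spirit (a hypothetical sketch, not taken from the article) classifies the relative position of two discs of equal radius by comparing the distance between their centres with the sum of the radii:

```python
import math

def classify(c1, c2, r=1.0):
    """Relative position of two discs of radius r centred at c1 and c2:
    'overlap', 'touch', or 'disjoint'."""
    d = math.dist(c1, c2)
    if math.isclose(d, 2 * r):
        return "touch"
    return "overlap" if d < 2 * r else "disjoint"

# Three unit discs packed in a row: neighbours touch, the outer pair is disjoint.
centres = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
print(classify(centres[0], centres[1]))  # touch
print(classify(centres[0], centres[2]))  # disjoint
```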
Examples include the study of sphere packings, triangulations, the Kneser-Poulsen conjecture, etc. It shares many methods and principles with combinatorics. Computational geometry Computational geometry deals with algorithms and their implementations for manipulating geometrical objects. Important problems historically have included the travelling salesman problem, minimum spanning trees, hidden-line removal, and linear programming. Although being a young area of geometry, it has many applications in computer vision, image processing, computer-aided design, medical imaging, etc. Geometric group theory Geometric group theory studies group actions on objects that are regarded as geometric (significantly, isometric actions on metric spaces) to study finitely generated groups, often involving large-scale geometric techniques. It is closely connected to low-dimensional topology, such as in Grigori Perelman's proof of the Geometrization conjecture, which included the proof of the Poincaré conjecture, a Millennium Prize Problem. Group actions on their Cayley graphs are foundational examples of isometric group actions. Other major topics include quasi-isometries, Gromov-hyperbolic groups and their generalizations (relatively and acylindrically hyperbolic groups), free groups and their automorphisms, groups acting on trees, various notions of nonpositive curvature for groups (CAT(0) groups, Dehn functions, automaticity...), right angled Artin groups, and topics close to combinatorial group theory such as small cancellation theory and algorithmic problems (e.g. the word, conjugacy, and isomorphism problems). Other group-theoretic topics like mapping class groups, property (T), solvability, amenability and lattices in Lie groups are sometimes regarded as strongly geometric as well. Convex geometry Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis and discrete mathematics. It has close connections to convex analysis, optimization and functional analysis and important applications in number theory. Convex geometry dates back to antiquity. Archimedes gave the first known precise definition of convexity. The isoperimetric problem, a recurring concept in convex geometry, was studied by the Greeks as well, including Zenodorus. Archimedes, Plato, Euclid, and later Kepler and Coxeter all studied convex polytopes and their properties. From the 19th century on, mathematicians have studied other areas of convex mathematics, including higher-dimensional polytopes, volume and surface area of convex bodies, Gaussian curvature, algorithms, tilings and lattices. Applications Geometry has found applications in many fields, some of which are described below. Art Mathematics and art are related in a variety of ways. For instance, the theory of perspective showed that there is more to geometry than just the metric properties of figures: perspective is the origin of projective geometry. Artists have long used concepts of proportion in design. Vitruvius developed a complicated theory of ideal proportions for the human figure. These concepts have been used and adapted by artists from Michelangelo to modern comic book artists. The golden ratio is a particular proportion that has had a controversial role in art. 
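For reference (an editorial addition), the golden ratio is the positive number φ for which a length a relates to a shorter length b as their sum relates to a, that is, a/b = (a+b)/a; solving this proportion gives

```latex
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618
```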
Often claimed to be the most aesthetically pleasing ratio of lengths, it is frequently stated to be incorporated into famous works of art, though the most reliable and unambiguous examples were made deliberately by artists aware of this legend. Tilings, or tessellations, have been used in art throughout history. Islamic art makes frequent use of tessellations, as did the art of M. C. Escher. Escher's work also made use of hyperbolic geometry. Cézanne advanced the theory that all images can be built up from the sphere, the cone, and the cylinder. This is still used in art theory today, although the exact list of shapes varies from author to author. Architecture Geometry has many applications in architecture. In fact, it has been said that geometry lies at the core of architectural design. Applications of geometry to architecture include the use of projective geometry to create forced perspective, the use of conic sections in constructing domes and similar objects, the use of tessellations, and the use of symmetry. Physics The field of astronomy, especially as it relates to mapping the positions of stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, has served as an important source of geometric problems throughout history. Riemannian geometry and pseudo-Riemannian geometry are used in general relativity. String theory makes use of several variants of geometry, as does quantum information theory. Other fields of mathematics Calculus was strongly influenced by geometry. For instance, the introduction of coordinates by René Descartes and the concurrent developments of algebra marked a new stage for geometry, since geometric figures such as plane curves could now be represented analytically in the form of functions and equations. This played a key role in the emergence of infinitesimal calculus in the 17th century. Analytic geometry continues to be a mainstay of the pre-calculus and calculus curriculum. Another important area of application is number theory. In ancient Greece, the Pythagoreans considered the role of numbers in geometry. However, the discovery of incommensurable lengths contradicted their philosophical views. Since the 19th century, geometry has been used for solving problems in number theory, for example through the geometry of numbers or, more recently, scheme theory, which is used in Wiles's proof of Fermat's Last Theorem.
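For reference (an editorial note), the statement proved by Wiles asserts that the equation

```latex
x^{n} + y^{n} = z^{n}
```

has no solutions in positive integers x, y and z when the integer exponent n is greater than 2.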
See also Lists List of geometers :Category:Algebraic geometers :Category:Differential geometers :Category:Geometers :Category:Topologists List of formulas in elementary geometry List of geometry topics List of important publications in geometry Lists of mathematics topics Related topics Descriptive geometry Flatland, a book written by Edwin Abbott Abbott about two- and three-dimensional space, to understand the concept of four dimensions List of interactive geometry software Other applications Molecular geometry Notes References Sources Further reading External links A geometry course from Wikiversity Unusual Geometry Problems The Math Forum – Geometry The Math Forum – K–12 Geometry The Math Forum – College Geometry The Math Forum – Advanced Geometry Nature Precedings – Pegs and Ropes Geometry at Stonehenge The Mathematical Atlas – Geometric Areas of Mathematics "4000 Years of Geometry", lecture by Robin Wilson given at Gresham College, 3 October 2007 (available for MP3 and MP4 download as well as a text file) Finitism in Geometry at the Stanford Encyclopedia of Philosophy The Geometry Junkyard Interactive geometry reference with hundreds of applets Dynamic Geometry Sketches (with some Student Explorations) Geometry classes at Khan Academy
Geometry
[ "Mathematics" ]
6,194
[ "Geometry" ]
18,974,919
https://en.wikipedia.org/wiki/Organoyttrium%20chemistry
Organoyttrium chemistry is the study of compounds containing carbon-yttrium bonds. These compounds are almost invariably formal Y3+ derivatives and are generally diamagnetic and colorless, a consequence of the closed-shell configuration of the trication. Organoyttrium compounds are mainly of academic interest. Organoyttrium compounds are often prepared by alkylation of . References Yttrium compounds Organometallic compounds
Organoyttrium chemistry
[ "Chemistry" ]
89
[ "Organic compounds", "Organometallic compounds", "Organometallic chemistry", "Inorganic compounds" ]
79,610
https://en.wikipedia.org/wiki/Aerospace%20manufacturer
An aerospace manufacturer is a company or individual involved in the various aspects of designing, building, testing, selling, and maintaining aircraft, aircraft parts, missiles, rockets, or spacecraft. Aerospace is a high technology industry. The aircraft industry is the industry supporting aviation by building aircraft and manufacturing aircraft parts for their maintenance. This includes aircraft and parts used for civil aviation and military aviation. Most production is done pursuant to type certificates and Defense Standards issued by a government body. This term has been largely subsumed by the more encompassing term: "aerospace industry". Market In 2015 the aircraft production was worth US$180.3 billion: 61% airliners, 14% business and general aviation, 12% military aircraft, 10% military rotary wing and 3% civil rotary wing; while their MRO was worth $135.1 Bn or $ Bn combined. The global aerospace industry was worth $838.5 billion in 2017: aircraft & engine OEMs represented 28% ($ Bn), civil & military MRO & upgrades 27% ($ Bn), aircraft systems & component manufacturing 26% ($ Bn), satellites & space 7% ($ Bn), missiles & UAVs 5% ($ Bn) and other activity, including flight simulators, defense electronics, public research accounted for 7% ($ Bn). The Top 10 countries with the largest industrial bases in 2017 were the United States with $408.4 billion (representing % of the whole), followed by France with $69 billion (%), then China with $61.2 billion (%), the United Kingdom with $48.8 billion (%), Germany with $46.2 billion (%), Russia with $27.1 billion (%), Canada with $24 billion (%), Japan with $21 billion (%), Spain with $14 billion (%) and India with $11 billion (%). These ten countries represent $731 billion or % of the whole industry. In 2018, the new commercial aircraft value is projected for $270.4 billion while business aircraft will amount for $18 billion and civil helicopters for $4 billion. Largest aerospace companies Geography In September 2018, PwC ranked aerospace manufacturing attractiveness: the most attractive country was the United States, with $240 billion in sales in 2017, due to the sheer size of the industry (#1) and educated workforce (#1), low geopolitical risk (#4, #1 is Japan), strong transportation infrastructure (#5, #1 is Hong Kong), a healthy economy (#10, #1 is China), but high costs (#7, #1 is Denmark) and average tax policy (#36, #1 is Qatar). Following were Canada, Singapore, Switzerland and United Kingdom. Within the US, the most attractive was Washington state, due to the best Industry (#1), leading Infrastructure (#4, New Jersey is #1) and Economy (#4, Texas is #1), good labor (#9, Massachusetts is #1), average tax policy (#17, Alaska is #1) but is costly (#33, Montana is #1). Washington is tied to Boeing Commercial Airplanes, earning $10.3 billion, is home to 1,400 aerospace-related businesses, and has the highest aerospace jobs concentration. Following are Texas, Georgia, Arizona and Colorado. In the US, the Department of Defense and NASA are the two biggest consumers of aerospace technology and products. The Bureau of Labor Statistics of the United States reported that the aerospace industry employed 444,000 wage and salary jobs in 2004, many of which were in Washington and California, this marked a steep decline from the peak years during the Reagan Administration when total employment exceeded 1,000,000 aerospace industry workers. During that period of recovery a special program to restore U.S. competitiveness across all U.S. 
industries, Project Socrates, contributed to employment growth as the U.S. aerospace industry captured 72 percent of world aerospace market. By 1999 U.S. share of the world market fell to 52 percent. In the European Union, aerospace companies such as Airbus, Safran, BAE Systems, Thales, Dassault, Saab AB, Terma A/S, Patria Plc and Leonardo are participants in the global aerospace industry and research effort. In Russia, large aerospace companies like Oboronprom and the United Aircraft Corporation (encompassing Mikoyan, Sukhoi, Ilyushin, Tupolev, Yakovlev, and Irkut, which includes Beriev) are among the major global players in this industry. Cities Important locations of the civil aerospace industry worldwide include Seattle, Wichita, Kansas, Dayton, Ohio and St. Louis in the United States (Boeing), Montreal and Toronto in Canada (Bombardier, Pratt & Whitney Canada), Toulouse and Bordeaux in France (Airbus, Dassault, ATR), Seville in Spain and Hamburg in Germany (Airbus), the North-West of England and Bristol in Britain (Airbus and AgustaWestland), Komsomolsk-on-Amur and Irkutsk in Russia (Sukhoi, Beriev), Kyiv and Kharkiv in Ukraine (Antonov), Nagoya in Japan (Mitsubishi Heavy Industries Aerospace and Kawasaki Heavy Industries Aerospace), as well as São José dos Campos in Brazil where Embraer is based. Consolidation Several consolidations took place in the aerospace and defense industries over the last few decades. Airbus prominently illustrated the European airliner manufacturing consolidation in the late 1960s. Between 1988 and 2010, more than 5,452 mergers and acquisitions with a total known-value of US$579 billion were announced worldwide. In 1993, then United States Secretary of Defense Les Aspin and his deputy William J. Perry held the "Last Supper" at the Pentagon with contractors executives who were told that there were twice as many military suppliers as he wanted to see: $55 billion in military–industry mergers took place from 1992 to 1997, leaving mainly Boeing, Lockheed Martin, Northrop Grumman and Raytheon. Boeing bought McDonnell Douglas for US$13.3 billion in 1996. Raytheon acquired Hughes Aircraft Company for $9.5 billion in 1997. BAE Systems is the successor company to numerous British aircraft manufacturers which merged throughout the second half of the 20th century. Many of these mergers followed the 1957 Defence White Paper. Marconi Electronic Systems, a subsidiary of the General Electric Company plc, was acquired by British Aerospace for US$12.3 billion in 1999 merger, to form BAE Systems. In 2002, when Fairchild Dornier was bankrupt, Airbus, Boeing or Bombardier declined to take the 728JET/928JET large regional jet program as mainline and regional aircraft manufacturers were split and Airbus was digesting its ill-fated Fokker acquisition a decade earlier. On September 4, 2017, United Technologies acquired Rockwell Collins in cash and stock for $23 billion, $30 billion including Rockwell Collins' net debt, for $500+ million of synergies expected by year four. The Oct. 16, 2017 announcement of the CSeries partnership between Airbus and Bombardier Aerospace could trigger a daisy chain of reactions towards a new order. Airbus gets a new, efficient model at the lower end of the narrowbody market which provides the bulk of airliner profits and can abandon the slow selling A319 while Bombardier benefits from the growth in this expanded market even if it holds a smaller residual stake. 
Boeing could forge a similar alliance with either Embraer, with its E-jet E2, or Mitsubishi Heavy Industries, with its MRJ. On 21 December, Boeing and Embraer confirmed they were discussing a potential combination, with any transaction subject to approval by Brazilian government regulators, the companies' boards and shareholders. The weight of Airbus and Boeing could help E2 and CSeries sales, but the 100-150 seat market seems slow. As the CSeries, renamed A220, and E-jet E2 are more capable than their predecessors, they moved closer to the lower end of the narrowbodies. In 2018, the four Western airframers combined into two within nine months as Boeing acquired 80% of Embraer's airliners for $3.8 billion on July 5. On April 3, 2020, Raytheon and United Technologies Corporation (excluding Otis Worldwide, leaving Rockwell Collins and engine maker Pratt and Whitney) merged to form Raytheon Technologies Corporation, with combined sales of $79 billion in 2019. The most prominent unions between 1995 and 2020 include those of Boeing and McDonnell Douglas; the French, German and Spanish parts of EADS; and United Technologies with Rockwell Collins then Raytheon, but many merger projects did not go through: Textron-Bombardier, EADS-BAE Systems, Hawker Beechcraft-Superior Aviation, GE-Honeywell, BAE Systems-Boeing (or Lockheed Martin), Dassault-Aerospatiale, Safran-Thales, BAE Systems-Rolls-Royce or Lockheed Martin–Northrop Grumman. Suppliers The largest aerospace suppliers are United Technologies with $28.2 billion of revenue, followed by GE Aviation with $24.7 billion, Safran with $22.5 billion, Rolls-Royce Holdings with $16.9 billion, Honeywell Aerospace with $15.2 billion and Rockwell Collins including B/E Aerospace with $8.1 billion. Electric aircraft development could generate large changes for the aerospace suppliers. On 26 November 2018, United Technologies announced the completion of its Rockwell Collins acquisition, renaming systems supplier UTC Aerospace Systems as Collins Aerospace, for $23 billion of sales in 2017 and 70,000 employees, and $39.0 billion of sales in 2017 combined with engine manufacturer Pratt & Whitney. Supply chain Before the 1980s/1990s, aircraft and aeroengine manufacturers were vertically integrated. Then Douglas Aircraft outsourced large aerostructures and the Bombardier Global Express pioneered the "Tier 1" supply chain model inspired by the automotive industry, with 10-12 risk-sharing limited partners funding around half of the development costs. The Embraer E-Jet followed in the late 1990s with fewer than 40 primary suppliers. Tier 1 suppliers were led by Honeywell, Safran, Goodrich Corporation and Hamilton Sundstrand. In the 2000s, Rolls-Royce reduced its supplier count after bringing in automotive supply chain executives. On the Airbus A380, fewer than 100 major suppliers provide 60% of its value, rising to 80% on the A350. Boeing embraced an aggressive Tier 1 model for the 787 but, with its difficulties, began to question why it was earning lower margins than its suppliers while it seemed to take all the risk, prompting its 2011 Partnering for Success initiative, as Airbus initiated its own Scope+ initiative for the A320. Tier 1 consolidation also affects engine manufacturers: GE Aerospace acquired Avio in 2013 and Rolls-Royce took control of ITP Aero.
See also Aerospace Aerospace industry in the United Kingdom Military–industrial complex Aircraft industry Aircraft industry of Russia Aircraft parts industry List of aircraft manufacturers Space industry Space industry of India Space industry of Russia Commercial Spaceflight Federation (US) List of spacecraft manufacturers Supplier-furnished equipment References Further reading Hartley, Keith. The Political Economy Of Aerospace Industries: A Key Driver of Growth and International Competitiveness? (Edward Elgar, 2014); 288 pages; the industry in Britain, continental Europe, and the US with a case study of BAE Systems. Newhouse, John. The Sporty Game: The High-Risk Competitive Business of Making and Selling Commercial Airliners. New York: Alfred A. Knopf, 1982. . Wills, Jocelyn. Tug of War: Surveillance Capitalism, Military Contracting, and the Rise of the Security State'' (McGill-Queen's University Press, 2017), scholarly history of MDA in Canada. online book review External links Aerospace Craft & Structural Components Manufacturer Manufacturer Aircraft industry Aerospace
Aerospace manufacturer
[ "Physics", "Engineering" ]
2,440
[ "Spacetime", "Space", "Aerospace", "Aerospace engineering" ]
80,322
https://en.wikipedia.org/wiki/Lev%20Landau
Lev Davidovich Landau (; 22 January 1908 – 1 April 1968) was a Soviet physicist who made fundamental contributions to many areas of theoretical physics. He was considered as one of the last scientists who were universally well-versed and made seminal contributions to all branches of physics. He is credited with laying the foundations of twentieth century condensed matter physics, and is also considered arguably the greatest Soviet theoretical physicist. His accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, invention of order parameter technique, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquids, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S-matrix singularities. He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below (). Life Early years Landau was born on 22 January 1908 to Jewish parents in Baku, the Russian Empire, in what is now Azerbaijan. Landau's father, David Lvovich Landau, was an engineer with the local oil industry, and his mother, Lyubov Veniaminovna Garkavi-Landau, was a doctor. Both came to Baku from Mogilev and both graduated the Mogilev gymnasium. He learned differential calculus at age 12 and integral calculus at age 13. Landau graduated in 1920 at age 13 from gymnasium. His parents considered him too young to attend university, so for a year he attended the Baku Economical Technical School. In 1922, at age 14, he matriculated at the Baku State University, studying in two departments simultaneously: the Departments of Physics and Mathematics, and the Department of Chemistry. Subsequently, he ceased studying chemistry, but remained interested in the field throughout his life. Leningrad and Europe In 1924, he moved to the main centre of Soviet physics at the time: the Physics Department of Leningrad State University, where he dedicated himself to the study of theoretical physics, graduating in 1927. Landau subsequently enrolled for post-graduate studies at the Leningrad Physico-Technical Institute where he eventually received a doctorate in Physical and Mathematical Sciences in 1934. Landau got his first chance to travel abroad during the period 1929–1931, on a Soviet government—People's Commissariat for Education—travelling fellowship supplemented by a Rockefeller Foundation fellowship. By that time he was fluent in German and French and could communicate in English. He later improved his English and learned Danish. After brief stays in Göttingen and Leipzig, he went to Copenhagen on 8 April 1930 to work at the Niels Bohr's Institute for Theoretical Physics. He stayed there until 3 May of the same year. After the visit, Landau always considered himself a pupil of Niels Bohr and Landau's approach to physics was greatly influenced by Bohr. After his stay in Copenhagen, he visited Cambridge (mid-1930), where he worked with Paul Dirac, Copenhagen (September to November 1930), and Zürich (December 1930 to January 1931), where he worked with Wolfgang Pauli. 
From Zürich Landau went back to Copenhagen for the third time and stayed there from 25 February until 19 March 1931 before returning to Leningrad the same year. National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv Between 1932 and 1937, Landau headed the Department of Theoretical Physics at the National Scientific Center Kharkiv Institute of Physics and Technology, and he lectured at the University of Kharkiv and the Kharkiv Polytechnic Institute. Apart from his theoretical accomplishments, Landau was the principal founder of a great tradition of theoretical physics in Kharkiv, Ukraine, sometimes referred to as the "Landau school". In Kharkiv, he and his friend and former student, Evgeny Lifshitz, began writing the Course of Theoretical Physics, ten volumes that together span the whole of the subject and are still widely used as graduate-level physics texts. During the Great Purge, Landau was investigated within the UPTI Affair in Kharkiv, but he managed to leave for Moscow to take up a new post. Landau developed a famous comprehensive exam called the "Theoretical Minimum" which students were expected to pass before admission to the school. The exam covered all aspects of theoretical physics, and between 1934 and 1961 only 43 candidates passed, but those who did later became quite notable theoretical physicists. In 1932, Landau computed the Chandrasekhar limit; however, he did not apply it to white dwarf stars. Institute for Physical Problems, Moscow From 1937 until 1962, Landau was the head of the Theoretical Division at the Institute for Physical Problems. On 27 April 1938, Landau was arrested for a leaflet which compared Stalinism to German Nazism and Italian Fascism. He was held in the NKVD's Lubyanka prison until his release, on 29 April 1939, after Pyotr Kapitsa (an experimental low-temperature physicist and the founder and head of the institute) and Bohr wrote letters to Joseph Stalin. Kapitsa personally vouched for Landau's behaviour and threatened to quit the institute if Landau was not released. After his release, Landau discovered how to explain Kapitsa's superfluidity using sound waves, or phonons, and a new excitation called a roton. Landau led a team of mathematicians supporting Soviet atomic and hydrogen bomb development. He calculated the dynamics of the first Soviet thermonuclear bomb, including predicting the yield. For this work Landau received the Stalin Prize in 1949 and 1953, and was awarded the title "Hero of Socialist Labour" in 1954. Landau's students included Lev Pitaevskii, Alexei Abrikosov, Aleksandr Akhiezer, Igor Dzyaloshinskii, Evgeny Lifshitz, Lev Gor'kov, Isaak Khalatnikov, Roald Sagdeev and Isaak Pomeranchuk. Scientific achievements Landau's accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquids, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, the explanation of flame instability (the Darrieus-Landau instability), and Landau's equations for S matrix singularities. Landau received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C)." 
Personal life and views In 1937, Landau married Kora T. Drobanzeva from Kharkiv. Their son Igor (1946–2011) became a theoretical physicist. Lev Landau believed in "free love" rather than monogamy and encouraged his wife and his students to practise "free love". However, his wife was not enthusiastic. Landau is generally described as an atheist, although when Soviet filmmaker Andrei Tarkovsky asked Landau whether he believed in the existence of God, Landau pondered the matter in silence for three minutes before responding, "I think so." In 1957, a lengthy report to the CPSU Central Committee by the KGB recorded Landau's views on the 1956 Hungarian Uprising, Vladimir Lenin and what he termed "red fascism". Hendrik Casimir recalls him as a passionate communist, emboldened by his revolutionary ideology. Landau's drive in establishing Soviet science was in part due to his devotion to socialism. In 1935 he published a piece titled “Bourgeoisie and Contemporary Physics” in the Soviet newspaper Izvestia in which he criticized religious superstition and the dominance of capital, which he saw as bourgeois tendencies, citing “unprecedented opportunities for the development of physics in our country, provided by the Party and the government.” Last years On 7 January 1962, Landau's car collided with an oncoming truck. He was severely injured and spent two months in a coma. Although Landau recovered in many ways, his scientific creativity was destroyed, and he never returned fully to scientific work. His injuries prevented him from accepting the 1962 Nobel Prize in Physics in person. Throughout his life Landau was known for his sharp humour, as illustrated by the following dialogue with a psychologist, Alexander Luria, who tried to test for possible brain damage while Landau was recovering from the car crash: Luria: "Please draw me a circle" Landau draws a cross Luria: "Hm, now draw me a cross" Landau draws a circle Luria: "Landau, why don't you do what I ask?" Landau: "If I did, you might come to think I've become mentally retarded". In 1965 former students and co-workers of Landau founded the Landau Institute for Theoretical Physics, located in the town of Chernogolovka near Moscow, and led for the following three decades by Isaak Khalatnikov. In June 1965, Lev Landau and Evsei Liberman published a letter in the New York Times, stating that as Soviet Jews they opposed U.S. intervention on behalf of the Student Struggle for Soviet Jewry. However, there are doubts that Landau authored this letter. Death Landau died on 1 April 1968, aged 60, from complications of the injuries sustained in the car accident six years earlier. He was buried at the Novodevichy Cemetery. Fields of contribution DLVO theory Fermi liquid theory Quasiparticle Ivanenko–Landau–Kähler equation Landau damping Landau distribution Landau gauge Landau kinetic equation Landau pole Landau susceptibility Landau potential Landau quantization Landau theory Landau–Squire jet Landau–Levich problem Landau–Hopf theory of turbulence Ginzburg–Landau theory Darrieus–Landau instability Landau–Lifshitz aeroacoustic equation Landau–Raychaudhuri equation Landau–Zener formula Landau–Lifshitz model Landau–Lifshitz pseudotensor Landau–Lifshitz–Gilbert equation Landau–Pomeranchuk–Migdal effect Landau–Yang theorem Landau principle Stuart–Landau equation Superfluidity Superconductivity Pedagogy Course of Theoretical Physics Legacy Two celestial objects are named in his honour: the minor planet 2142 Landau. the lunar crater Landau. 
The highest prize in theoretical physics awarded by the Russian Academy of Sciences is named in his honour: Landau Gold Medal On 22 January 2019, Google celebrated what would have been Landau's 111th birthday with a Google Doodle. The Landau-Spitzer Award (American Physical Society), which recognizes outstanding contributions to plasma physics and European-United States collaboration, is named in-part in his honor. Landau's ranking of physicists Landau kept a list of names of physicists which he ranked on a logarithmic scale of productivity and genius, such as creativity and innate talent, ranging from 0 to 5. The highest ranking, 0, was assigned to Isaac Newton. Albert Einstein was ranked 0.5. A rank of 1 was awarded to the founding fathers of quantum mechanics, Niels Bohr, Werner Heisenberg, Satyendra Nath Bose, Paul Dirac and Erwin Schrödinger, and others, while members of rank of 5 were deemed "pathologists". Landau ranked himself as a 2.5 but later promoted to a 2. N. David Mermin, writing about Landau, referred to the scale, and ranked himself in the fourth division, in the article "My Life with Landau: Homage of a 4.5 to a 2". Landau had a lesser known scale to measure the genius of a scientist using diagrams instead. He had four classes of diagrams, with the first being a simple triangle, which included those who were the most original and brilliant, such as Dirac and Einstein. The diagrams were formed by two parallel lines, bottom representing tenacity, while the top measured genius and originality. In popular culture The Russian television film My Husband — the Genius (translation of the Russian title Мой муж — гений) released in 2008 tells the biography of Landau (played by Daniil Spivakovsky), mostly focusing on his private life. It was generally panned by critics. People who had personally met Landau, including famous Russian scientist Vitaly Ginzburg, said that the film was not only terrible, but also false in historical facts. Another film about Landau, Dau, is directed by Ilya Khrzhanovsky with non-professional actor Teodor Currentzis (an orchestra conductor) as Landau. Dau was a common nickname of Lev Landau. The film was part of the multidisciplinary art project DAU. Works Landau wrote his first paper On the derivation of Klein–Fock equation, co-authored with Dmitri Ivanenko in 1926, when he was 18 years old. His last paper titled Fundamental problems appeared in 1960 in an edited version of tributes to Wolfgang Pauli. A complete list of Landau's works appeared in 1998 in the Russian journal Physics-Uspekhi. Landau would allow himself to be listed as a co-author of a journal article on two conditions: 1) he brought up the idea of the work, partly or entirely, and 2) he performed at least some calculations presented in the article. Consequently, he removed his name from numerous publications of his students where his contribution was less significant. Course of Theoretical Physics — 2nd ed. (1965) at archive.org Landau and Lifshitz suggested in the third volume of the Course of Theoretical Physics that the then-standard periodic table had a mistake in it, and that lutetium should be regarded as a d-block rather than an f-block element. Their suggestion was fully vindicated by later findings, and in 1988 was endorsed by a report of the International Union of Pure and Applied Chemistry (IUPAC). Other in 4 volumes: volume 1 Physical bodies ; volume 2 Molecules ; volume 3 Electrons and volume 4 Photons and nuclei; vols. 
3 & 4 by Kitaigorodsky alone See also List of Jewish Nobel laureates List of things named after Lev Landau References Further reading Books (After Landau's 1962 car accident, the physics community around him rallied to attempt to save his life. They managed to prolong his life until 1968.) Articles Karl Hufbauer, "Landau's youthful sallies into stellar theory: Their origins, claims, and receptions", Historical Studies in the Physical and Biological Sciences, 37 (2007), 337–354. "As a student, Landau dared to correct Einstein in a lecture". Global Talent News. Lev Davidovich Landau. Nobel-Winners. Landau's Theoretical Minimum, Landau's Seminar, ITEP in the Beginning of the 1950s by Boris L. Ioffe, Concluding talk at the workshop QCD at the Threshold of the Fourth Decade/Ioeffest. EJTP Landau Issue 2008. Ammar Sakaji and Ignazio Licata (eds), Lev Davidovich Landau and his Impact on Contemporary Theoretical Physics, Nova Science Publishers, New York, 2009, . Gennady Gorelik, "The Top Secret Life of Lev Landau", Scientific American, Aug. 1997, vol. 277(2), 53–57, JSTOR link. Maya Bessarab, "Landau's Life Pages(in Russian)". External links Lev Landau 1908 births 1968 deaths Soviet Nobel laureates Azerbaijani Jews Scientists from Baku Burials at Novodevichy Cemetery Fluid dynamicists Foreign associates of the National Academy of Sciences Foreign members of the Royal Society Full Members of the USSR Academy of Sciences Nobel laureates in Physics Heroes of Socialist Labour Recipients of the Stalin Prize Recipients of the Lenin Prize Recipients of the Order of the Badge of Honour Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Winners of the Max Planck Medal Jewish atheists Jewish physicists Members of the German National Academy of Sciences Leopoldina Academic staff of Moscow State University Academic staff of the Moscow Institute of Physics and Technology People from Baku Governorate Saint Petersburg State University alumni Soviet atheists Soviet inventors Soviet Jews Soviet physicists Theoretical physicists Academic staff of the National University of Kharkiv Superfluidity People involved with the periodic table Russian scientists
Lev Landau
[ "Physics", "Chemistry", "Materials_science" ]
3,416
[ "Periodic table", "People involved with the periodic table", "Physical phenomena", "Phase transitions", "Theoretical physics", "Phases of matter", "Superfluidity", "Fluid dynamicists", "Condensed matter physics", "Exotic matter", "Theoretical physicists", "Matter", "Fluid dynamics" ]
1,114,239
https://en.wikipedia.org/wiki/Arsenic%20trisulfide
Arsenic trisulfide is the inorganic compound with the formula As2S3. It is a dark yellow solid that is insoluble in water. It also occurs as the mineral orpiment (Latin: auripigmentum), which has been used as a pigment called King's yellow. It is produced in the analysis of arsenic compounds. It is a group V/VI, intrinsic p-type semiconductor and exhibits photo-induced phase-change properties. Structure As2S3 occurs both in crystalline and amorphous forms. Both forms feature polymeric structures consisting of trigonal pyramidal As(III) centres linked by sulfide centres. The sulfide centres are two-fold coordinated to two arsenic atoms. In the crystalline form, the compound adopts a ruffled sheet structure. The bonding between the sheets consists of van der Waals forces. The crystalline form is usually found in geological samples. Amorphous As2S3 does not possess a layered structure but is more highly cross-linked. Like other glasses, there is no medium or long-range order, but the first co-ordination sphere is well defined. As2S3 is a good glass former and exhibits a wide glass-forming region in its phase diagram. Properties It is a semiconductor, with a direct band gap of 2.7 eV. The wide band gap makes it transparent to infrared light between 620 nm and 11 μm. Synthesis From the elements Amorphous As2S3 is obtained via the fusion of the elements at 390 °C. Rapid cooling of the reaction melt gives a glass. The reaction can be represented with the chemical equation 2 As + 3 S → As2S3. Aqueous precipitation As2S3 forms when aqueous solutions containing As(III) are treated with hydrogen sulfide. Arsenic was in the past analyzed and assayed by this reaction, which results in the precipitation of As2S3, which is then weighed. As2S3 can even be precipitated in 6 M HCl. As2S3 is so insoluble that it is not toxic. Reactions Upon heating in a vacuum, polymeric As2S3 "cracks" to give a mixture of molecular species, including molecular As4S6. As4S6 adopts the adamantane geometry, like that observed for P4O6 and As4O6. When a film of this material is exposed to an external energy source such as thermal energy (via thermal annealing) or electromagnetic radiation (i.e. UV lamps, lasers, electron beams), As4S6 polymerizes back into the As2S3 network. As2S3 characteristically dissolves upon treatment with aqueous solutions containing sulfide ions. The dissolved arsenic species is the pyramidal trithioarsenite anion [AsS3]3−: As2S3 + 3 S2− → 2 [AsS3]3−. As2S3 is the anhydride of the hypothetical trithioarsenous acid, H3AsS3. Upon treatment with polysulfide ions, As2S3 dissolves to give a variety of species containing both S–S and As–S bonds. One such derivative is an eight-membered ring that contains 7 S atoms and 1 As atom, and an exocyclic sulfido center attached to the As atom. As2S3 also dissolves in strongly alkaline solutions to give a mixture of arsenite and thioarsenite ions. "Roasting" in air gives volatile, toxic derivatives, this conversion being one of the hazards associated with the refining of heavy metal ores: 2 As2S3 + 9 O2 → 2 As2O3 + 6 SO2. Contemporary uses As an inorganic photoresist Due to its high refractive index of 2.45 and its large Knoop hardness compared to organic photoresists, As2S3 has been investigated for the fabrication of photonic crystals with a full photonic band gap. Advances in laser patterning techniques, such as three-dimensional direct laser writing (3-D DLW), and in chemical wet-etching have allowed this material to be used as a photoresist to fabricate 3-D nanostructures. As2S3 has been investigated for use as a high-resolution photoresist material since the early 1970s, using aqueous etchants.
Although these aqueous etchants allowed low-aspect-ratio 2-D structures to be fabricated, they do not allow for the etching of high-aspect-ratio structures with 3-D periodicity. Certain organic reagents, used in organic solvents, permit the high etch selectivity required to produce high-aspect-ratio structures with 3-D periodicity. Medical applications Arsenic trisulfide and related arsenic compounds have been investigated as treatments for acute promyelocytic leukemia (APL). For IR-transmitting glasses Arsenic trisulfide manufactured into amorphous form is used as a chalcogenide glass for infrared optics. It is transparent to light between wavelengths of 620 nm and 11 μm. The arsenic trisulfide glass is more resistant to oxidation than crystalline arsenic trisulfide, which minimizes toxicity concerns. It can also be used as an acousto-optic material. Arsenic trisulfide was used for the distinctive eight-sided conical nose over the infra-red seeker of the de Havilland Firestreak missile. Role in ancient artistry The ancient Egyptians reportedly used orpiment, natural or synthetic, as a pigment in artistry and cosmetics. Miscellaneous Arsenic trisulfide is also used as a tanning agent. It was formerly used with indigo dye for the production of pencil blue, which allowed dark blue hues to be added to fabric via pencil or brush. Precipitation of arsenic trisulfide is used as an analytical test for the presence of dissimilatory arsenic-reducing bacteria (DARB). Safety As2S3 is so insoluble that its toxicity is low. Aged samples can contain substantial amounts of arsenic oxides, which are soluble and therefore highly toxic. Natural occurrence Orpiment is found in volcanic environments, often together with other arsenic sulfides, mainly realgar. It is sometimes found in low-temperature hydrothermal veins, together with some other sulfide and sulfosalt minerals. References Further reading "Arsenic Compounds, Inorganic", Report on Carcinogens, Eleventh Edition (PDF), U.S. Department of Health and Human Services, Public Health Service, National Toxicology Program, 2005. External links UK Poison Information Document Poisons Information Monographs WHO Food Additives Series 24 JECFA Evaluation: Arsenic Arsenic trisulfide Arsenic(III) compounds Sesquisulfides Optical materials Non-oxide glasses
Arsenic trisulfide
[ "Physics" ]
1,237
[ "Materials", "Optical materials", "Matter" ]
1,114,402
https://en.wikipedia.org/wiki/Azo%20compound
Azo compounds are organic compounds bearing the functional group diazenyl (, in which R and R′ can be either aryl or alkyl groups). IUPAC defines azo compounds as: "Derivatives of diazene (diimide), , wherein both hydrogens are substituted by hydrocarbyl groups, e.g. azobenzene or diphenyldiazene.", where Ph stands for phenyl group. The more stable derivatives contain two aryl groups. The group is called an azo group (, ). Many textile and leather articles are dyed with azo dyes and pigments. Aryl azo compounds Aryl azo compounds are usually stable, crystalline species. Azobenzene is the prototypical aromatic azo compound. It exists mainly as the trans isomer, but upon illumination, converts to the cis isomer. Aromatic azo compounds can be synthesized by azo coupling, which entails an electrophilic substitution reaction where an aryl diazonium cation is attacked by another aryl ring, especially those substituted with electron-donating groups: Since diazonium salts are often unstable near room temperature, the azo coupling reactions are typically conducted near 0 °C. The oxidation of hydrazines () also gives azo compounds. Azo dyes are also prepared by the condensation of nitroaromatics with anilines followed by reduction of the resulting azoxy intermediate: For textile dying, a typical nitro coupling partner would be disodium 4,4′-dinitrostilbene-2,2′-disulfonate. Typical aniline partners are shown below. As a consequence of π-delocalization, aryl azo compounds have vivid colors, especially reds, oranges, and yellows. Therefore, they are used as dyes, and are commonly known as azo dyes, an example of which is Disperse Orange 1. Some azo compounds, e.g., methyl orange, are used as acid-base indicators due to the different colors of their acid and salt forms. Most DVD-R/+R and some CD-R discs use blue azo dye as the recording layer. The commercial success of azo dyes motivated the development of azo compounds in general. Alkyl azo compounds Aliphatic azo compounds (R and/or R′ = aliphatic) are less commonly encountered than the aryl azo compounds. A commercially important alkyl azo compound is azobisisobutyronitrile (AIBN), which is widely used as an initiator in free-radical polymerizations and other radical-induced reactions. It achieves this initiation by decomposition, eliminating a molecule of nitrogen gas to form two 2-cyanoprop-2-yl radicals: For instance a mixture of styrene and maleic anhydride in toluene will react if heated, forming the copolymer upon addition of AIBN. A simple dialkyl diazo compound is diethyldiazene, , which can be synthesized through a variant of the Ramberg–Bäcklund reaction. Because of their instability, aliphatic azo compounds pose the risk of explosion. AIBN is produced by converting acetone cyanohydrin to the hydrazine derivative followed by oxidation: Safety and regulation Many azo pigments are non-toxic, although some, such as dinitroaniline orange, ortho-nitroaniline orange, or pigment orange 1, 2, and 5 have been found to be mutagenic. Likewise, several case studies have linked azo pigments with basal cell carcinoma. European regulation Certain azo dyes can break down under reductive conditions to release any of a group of defined aromatic amines. Consumer goods which contain listed aromatic amines originating from azo dyes were prohibited from manufacture and sale in European Union countries in September 2003. As only a small number of dyes contained an equally small number of amines, relatively few products were affected. 
See also Azo coupling References Functional groups
Azo compound
[ "Chemistry" ]
856
[ "Functional groups" ]
1,115,052
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler%20divergence
In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted , is a type of statistical distance: a measure of how much a model probability distribution is different from a true probability distribution . Mathematically, it is defined as A simple interpretation of the KL divergence of from is the expected excess surprise from using as a model instead of when the actual distribution is . While it is a measure of how different two distributions are, and in some sense is thus a "distance", it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information), and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence, a generalization of squared distance, and for certain classes of distributions (notably an exponential family), it satisfies a generalized Pythagorean theorem (which applies to squared distances). Relative entropy is always a non-negative real number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference; and practical, such as applied statistics, fluid mechanics, neuroscience, bioinformatics, and machine learning. Introduction and context Consider two probability distributions and . Usually, represents the data, the observations, or a measured probability distribution. Distribution represents instead a theory, a model, a description or an approximation of . The Kullback–Leibler divergence is then interpreted as the average difference of the number of bits required for encoding samples of using a code optimized for rather than one optimized for . Note that the roles of and can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations. Etymology The relative entropy was introduced by Solomon Kullback and Richard Leibler in as "the mean information for discrimination between and per observation from ", where one is comparing two probability measures , and are the hypotheses that one is selecting from measure (respectively). They denoted this by , and defined the "'divergence' between and " as the symmetrized quantity , which had already been defined and used by Harold Jeffreys in 1948. In , the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as a "directed divergences" between two distributions; Kullback preferred the term discrimination information. The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in . The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence. 
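As a concrete illustration of the quantity introduced above, the discrete form of the divergence can be evaluated directly from two probability vectors. The following is a minimal sketch, not part of the original article, assuming Python with NumPy; the two distributions are arbitrary values chosen only for illustration (a binomial-like distribution against a uniform one, in the spirit of the basic example given below).

```python
import numpy as np

def kl_divergence(p, q, base=2.0):
    """Relative entropy D_KL(P || Q) for discrete distributions.

    Assumes p and q are probability vectors over the same sample space,
    with q[i] > 0 wherever p[i] > 0 (absolute continuity).  Terms with
    p[i] == 0 contribute zero by convention.  base=2 gives bits,
    base=np.e gives nats.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m]))) / np.log(base)

# Example distributions, chosen for illustration only.
p = np.array([0.36, 0.48, 0.16])   # a binomial distribution with n = 2, p = 0.4
q = np.array([1/3, 1/3, 1/3])      # a uniform distribution on the same three outcomes

print(kl_divergence(p, q))   # D_KL(P || Q)
print(kl_divergence(q, p))   # D_KL(Q || P) -- generally a different number
```

The asymmetry noted above is visible immediately: the two calls return different values. SciPy's `scipy.stats.entropy(p, q, base=2)` computes the same quantity and can be used as a cross-check.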
Definition For discrete probability distributions and defined on the same sample space, the relative entropy from to is defined to be which is equivalent to In other words, it is the expectation of the logarithmic difference between the probabilities and , where the expectation is taken using the probabilities . Relative entropy is only defined in this way if, for all , implies (absolute continuity). Otherwise, it is often defined as but the value is possible even if everywhere, provided that is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below. Whenever is zero the contribution of the corresponding term is interpreted as zero because For distributions and of a continuous random variable, relative entropy is defined to be the integral where and denote the probability densities of and . More generally, if and are probability measures on a measurable space and is absolutely continuous with respect to , then the relative entropy from to is defined as where is the Radon–Nikodym derivative of with respect to , i.e. the unique almost everywhere defined function on such that which exists because is absolutely continuous with respect to . Also we assume the expression on the right-hand side exists. Equivalently (by the chain rule), this can be written as which is the entropy of relative to . Continuing in this case, if is any measure on for which densities and with and exist (meaning that and are both absolutely continuous with respect to ), then the relative entropy from to is given as Note that such a measure for which densities can be defined always exists, since one can take although in practice it will usually be one that in the context like counting measure for discrete distributions, or Lebesgue measure or a convenient variant thereof like Gaussian measure or the uniform measure on the sphere, Haar measure on a Lie group etc. for continuous distributions. The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits, or to base if information is measured in nats. Most formulas involving relative entropy hold regardless of the base of the logarithm. Various conventions exist for referring to in words. Often it is referred to as the divergence between and , but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of from or as the divergence from to . This reflects the asymmetry in Bayesian inference, which starts from a prior and updates to the posterior . Another common way to refer to is as the relative entropy of with respect to or the information gain from over . Basic example Kullback gives the following example (Table 2.1, Example 2.1). Let and be the distributions shown in the table and figure. is the distribution on the left side of the figure, a binomial distribution with and . is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes , , (i.e. ), each with probability . Relative entropies and are calculated as follows. This example uses the natural log with base , designated to get results in nats (see units of information): Interpretations Statistics In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions and based on an observation (drawn from one of them) is through the log of the ratio of their likelihoods: . 
The KL divergence is the expected value of this statistic if is actually drawn from . Kullback motivated the statistic as an expected log likelihood ratio. Coding In the context of coding theory, can be constructed by measuring the expected number of extra bits required to code samples from using a code optimized for rather than the code optimized for . Inference In the context of machine learning, is often called the information gain achieved if would be used instead of which is currently used. By analogy with information theory, it is called the relative entropy of with respect to . Expressed in the language of Bayesian inference, is a measure of the information gained by revising one's beliefs from the prior probability distribution to the posterior probability distribution . In other words, it is the amount of information lost when is used to approximate . Information geometry In applications, typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while typically represents a theory, model, description, or approximation of . In order to find a distribution that is closest to , we can minimize the KL divergence and compute an information projection. While it is a statistical distance, it is not a metric, the most familiar type of distance, but instead it is a divergence. While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general does not equal , and the asymmetry is an important part of the geometry. The infinitesimal form of relative entropy, specifically its Hessian, gives a metric tensor that equals the Fisher information metric; see . Fisher information metric on the certain probability distribution let determine the natural gradient for information-geometric optimization algorithms. Its quantum version is Fubini-study metric. Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation. The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an -divergence. For probabilities over a finite alphabet, it is unique in being a member of both of these classes of statistical divergences. The application of Bregman divergence can be found in mirror descent. Finance (game theory) Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a “horse race” in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds. This is a special case of a much more general connection between financial returns and divergence measures. Financial risks are connected to via information geometry. Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). 
For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example. Motivation In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value out of a set of possibilities can be seen as representing an implicit probability distribution over , where is the length of the code for in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution is used, compared to using a code based on the true distribution : it is the excess entropy. where is the cross entropy of relative to and is the entropy of (which is the same as the cross-entropy of P with itself). The relative entropy can be thought of geometrically as a statistical distance, a measure of how far the distribution is from the distribution . Geometrically it is a divergence: an asymmetric, generalized form of squared distance. The cross-entropy is itself such a measurement (formally a loss function), but it cannot be thought of as a distance, since is not zero. This can be fixed by subtracting to make agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see ), the asymmetric form is more useful. See for more on the geometric interpretation. Relative entropy relates to "rate function" in the theory of large deviations. Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy. Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence. Properties Relative entropy is always non-negative, a result known as Gibbs' inequality, with equals zero if and only if as measures. In particular, if and , then -almost everywhere. The entropy thus sets a minimum value for the cross-entropy , the expected number of bits required when using a code based on rather than ; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value drawn from , if a code is used corresponding to the probability distribution , rather than the "true" distribution . No upper-bound exists for the general case. However, it is shown that if and are two discrete probability distributions built by distributing the same discrete quantity, then the maximum value of can be calculated. Relative entropy remains well-defined for continuous distributions, and furthermore is invariant under parameter transformations. For example, if a transformation is made from variable to variable , then, since and where is the absolute value of the derivative or more generally of the Jacobian, the relative entropy may be rewritten: where and . 
Although it was assumed that the transformation was continuous, this need not be the case. This also shows that the relative entropy produces a dimensionally consistent quantity, since if is a dimensioned variable, and are also dimensioned, since e.g. is dimensionless. The argument of the logarithmic term is and remains dimensionless, as it must. It can therefore be seen as in some ways a more fundamental quantity than some other properties in information theory (such as self-information or Shannon entropy), which can become undefined or negative for non-discrete probabilities. Relative entropy is additive for independent distributions in much the same way as Shannon entropy. If are independent distributions, and , and likewise for independent distributions then Relative entropy is convex in the pair of probability measures , i.e. if and are two pairs of probability measures then may be Taylor expanded about its minimum (i.e. ) as which converges if and only if almost surely w.r.t . Denote and note that . The first derivative of may be derived and evaluated as follows Further derivatives may be derived and evaluated as follows Hence solving for via the Taylor expansion of about evaluated at yields a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume that with measure strictly greater than . It then follows that there must exist some values , , and such that and with measure . The previous proof of sufficiency demonstrated that the measure component of the series where is bounded, so we need only concern ourselves with the behavior of the measure component of the series where . The absolute value of the th term of this component of the series is then lower bounded by , which is unbounded as , so the series diverges. Duality formula for variational inference The following result, due to Donsker and Varadhan, is known as Donsker and Varadhan's variational formula. Examples Multivariate normal distributions Suppose that we have two multivariate normal distributions, with means and with (non-singular) covariance matrices If the two distributions have the same dimension, , then the relative entropy between the distributions is as follows: The logarithm in the last term must be taken to base since all terms apart from the last are base- logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by yields the divergence in bits. In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions such that and . Then with and solutions to the triangular linear systems , and , A special case, and a common quantity in variational inference, is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance): For two univariate normal distributions and the above simplifies to In the case of co-centered normal distributions with , this simplifies to: Uniform distributions Consider two uniform distributions, with the support of enclosed within (). Then the information gain is: Intuitively, the information gain to a times narrower uniform distribution contains bits. This connects with the use of bits in computing, where bits would be needed to identify one element of a long stream. 
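The closed-form expressions for normal distributions given above lend themselves to a numerical check. Below is a minimal sketch, not part of the original article, assuming Python with NumPy: it evaluates the standard univariate formula D_KL(N(mu1, s1^2) || N(mu2, s2^2)) = ln(s2/s1) + (s1^2 + (mu1 - mu2)^2)/(2 s2^2) - 1/2 (in nats) and compares it with a Monte Carlo estimate of the defining expectation E_P[ln p(X) - ln q(X)]; the parameter values are arbitrary.

```python
import numpy as np

def kl_normal_exact(mu1, s1, mu2, s2):
    """Closed-form D_KL(N(mu1, s1^2) || N(mu2, s2^2)), in nats."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def kl_normal_mc(mu1, s1, mu2, s2, n=1_000_000, seed=0):
    """Monte Carlo estimate of E_P[log p(X) - log q(X)] with X ~ P."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu1, s1, size=n)
    log_p = -0.5 * ((x - mu1) / s1) ** 2 - np.log(s1 * np.sqrt(2 * np.pi))
    log_q = -0.5 * ((x - mu2) / s2) ** 2 - np.log(s2 * np.sqrt(2 * np.pi))
    return float(np.mean(log_p - log_q))

mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0   # arbitrary example parameters

print(kl_normal_exact(mu1, s1, mu2, s2))   # ~0.443 nats
print(kl_normal_mc(mu1, s1, mu2, s2))      # should agree closely
```

Swapping the roles of the two distributions gives a different value, consistent with the asymmetry discussed throughout the article.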
Relation to metrics While relative entropy is a statistical distance, it is not a metric on the space of probability distributions, but instead it is a divergence. While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric in general and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general does not equal , and while this can be symmetrized (see ), the asymmetry is an important part of the geometry. It generates a topology on the space of probability distributions. More concretely, if is a sequence of distributions such that , then it is said that . Pinsker's inequality entails that , where the latter stands for the usual convergence in total variation. Fisher information metric Relative entropy is directly related to the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions and are both parameterized by some (possibly multi-dimensional) parameter . Consider then two close by values of and so that the parameter differs by only a small amount from the parameter value . Specifically, up to first order one has (using the Einstein summation convention) with a small change of in the direction, and the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 for , i.e. , it changes only to second order in the small parameters . More formally, as for any minimum, the first derivatives of the divergence vanish and by the Taylor expansion one has up to second order where the Hessian matrix of the divergence must be positive semidefinite. Letting vary (and dropping the subindex 0) the Hessian defines a (possibly degenerate) Riemannian metric on the parameter space, called the Fisher information metric. Fisher information metric theorem When satisfies the following regularity conditions: exist, where is independent of then: Variation of information Another information-theoretic metric is variation of information, which is roughly a symmetrization of conditional entropy. It is a metric on the set of partitions of a discrete probability space. MAUVE Metric MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback-Leibler divergences between the two distributions in a quantized embedding space of a foundation model. Relation to other quantities of information theory Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases. Self-information The self-information, also known as the information content of a signal, random variable, or event is defined as the negative logarithm of the probability of the given outcome occurring. When applied to a discrete random variable, the self-information can be represented as is the relative entropy of the probability distribution from a Kronecker delta representing certainty that — i.e. the number of extra bits that must be transmitted to identify if only the probability distribution is available to the receiver, not the fact that . Mutual information The mutual information, is the relative entropy of the joint probability distribution from the product of the two marginal probability distributions — i.e. 
the expected number of extra bits that must be transmitted to identify and if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability is known, it is the expected number of extra bits that must on average be sent to identify if the value of is not already known to the receiver. Shannon entropy The Shannon entropy, is the number of bits which would have to be transmitted to identify from equally likely possibilities, less the relative entropy of the uniform distribution on the random variates of , , from the true distribution — i.e. less the expected number of bits saved, which would have had to be sent if the value of were coded according to the uniform distribution rather than the true distribution . This definition of Shannon entropy forms the basis of E.T. Jaynes's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy), which defines the continuous entropy as which is equivalent to: Conditional entropy The conditional entropy, is the number of bits which would have to be transmitted to identify from equally likely possibilities, less the relative entropy of the product distribution from the true joint distribution — i.e. less the expected number of bits saved which would have had to be sent if the value of were coded according to the uniform distribution rather than the conditional distribution of given . Cross entropy When we have a set of possible events, coming from the distribution , we can encode them (with a lossless data compression) using entropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution in advance, we can devise an encoding that would be optimal (e.g.: using Huffman coding). Meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled from ), which will be equal to Shannon's Entropy of (denoted as ). However, if we use a different probability distribution () when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between and . The cross entropy between two probability distributions ( and ) measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution , rather than the "true" distribution . The cross entropy for two distributions and over the same probability space is thus defined as follows. For explicit derivation of this, see the Motivation section above. Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyond ) for encoding the events because of using for constructing the encoding scheme instead of . Bayesian updating In Bayesian statistics, relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution: . 
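A small numerical sketch of this idea (not part of the original article; Python with NumPy, using an invented three-hypothesis example): a uniform prior over three hypotheses is updated with Bayes' theorem after one observation, and the information gained is the relative entropy of the posterior from the prior.

```python
import numpy as np

def kl_bits(p, q):
    """D_KL(P || Q) in bits for discrete distributions on the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / q[m])))

# Invented example: three hypotheses with a uniform prior.
prior = np.array([1/3, 1/3, 1/3])

# Made-up likelihoods of one observed datum under each hypothesis.
likelihood = np.array([0.8, 0.3, 0.1])

# Bayes' theorem: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

print(posterior)                  # updated beliefs
print(kl_bits(posterior, prior))  # information gained, in bits
```

Each further observation can be folded in the same way, with the divergence of the new posterior from the old one quantifying the additional information gained at that step.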
If some new fact is discovered, it can be used to update the posterior distribution for from to a new posterior distribution using Bayes' theorem: This distribution has a new entropy: which may be less than or greater than the original entropy . However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on instead of a new code based on would have added an expected number of bits: to the message length. This therefore represents the amount of useful information, or information gain, about , that has been learned by discovering . If a further piece of data, , subsequently comes in, the probability distribution for can be updated further, to give a new best guess . If one reinvestigates the information gain for using rather than , it turns out that it may be either greater or less than previously estimated: may be ≤ or > than and so the combined information gain does not obey the triangle inequality: may be <, = or > than All one can say is that on average, averaging using , the two sides will average out. Bayesian experimental design A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior. When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal. Discrimination information Relative entropy can also be interpreted as the expected discrimination information for over : the mean information per sample for discriminating in favor of a hypothesis against a hypothesis , when hypothesis is true. Another name for this quantity, given to it by I. J. Good, is the expected weight of evidence for over to be expected from each sample. The expected weight of evidence for over is not the same as the information gain expected per sample about the probability distribution of the hypotheses, Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies. On the entropy scale of information gain there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question. Principle of minimum discrimination information The idea of relative entropy as discrimination information led Kullback to propose the Principle of (MDI): given new facts, a new distribution should be chosen which is as hard to discriminate from the original distribution as possible; so that the new data produces as small an information gain as possible. For example, if one had a prior distribution over and , and subsequently learnt the true distribution of was , then the relative entropy between the new joint distribution for and , , and the earlier prior distribution would be: i.e. 
the sum of the relative entropy of the prior distribution for from the updated distribution , plus the expected value (using the probability distribution ) of the relative entropy of the prior conditional distribution from the new conditional distribution . (Note that often the later expected value is called the conditional relative entropy (or conditional Kullback–Leibler divergence) and denoted by ) This is minimized if over the whole support of ; and we note that this result incorporates Bayes' theorem, if the new distribution is in fact a δ function representing certainty that has one particular value. MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the relative entropy continues to be just as relevant. In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE) or Minxent for short. Minimising relative entropy from to with respect to is equivalent to minimizing the cross-entropy of and , since which is appropriate if one is trying to choose an adequate approximation to . However, this is just as often not the task one is trying to achieve. Instead, just as often it is that is some fixed prior reference measure, and that one is attempting to optimise by minimising subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be , rather than . Relationship to available work Surprisals add where probabilities multiply. The surprisal for an event of probability is defined as . If is then surprisal is in nats, bits, or so that, for instance, there are bits of surprisal for landing all "heads" on a toss of coins. Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal (entropy) for a given set of control parameters (like pressure or volume ). This constrained entropy maximization, both classically and quantum mechanically, minimizes Gibbs availability in entropy units where is a constrained multiplicity or partition function. When temperature is fixed, free energy () is also minimized. Thus if and number of molecules are constant, the Helmholtz free energy (where is energy and is entropy) is minimized as a system "equilibrates." If and are held constant (say during processes in your body), the Gibbs free energy is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature and pressure is where and (see also Gibbs inequality). More generally the work available relative to some ambient is obtained by multiplying ambient temperature by relative entropy or net surprisal defined as the average value of where is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of and is thus , where relative entropy The resulting contours of constant relative entropy, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here. 
Thus relative entropy measures thermodynamic availability in bits. Quantum information theory For density matrices and on a Hilbert space, the quantum relative entropy from to is defined to be In quantum information science the minimum of over all separable states can also be used as a measure of entanglement in the state . Relationship between models and reality Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn. Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via Akaike information criterion are particularly well described in papers and a book by Burnham and Anderson. In a nutshell the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation) . Estimates of such divergence for models that share the same additive term can in turn be used to select among models. When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators. Symmetrised divergence also considered the symmetrized function: which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948; it is accordingly called the Jeffreys divergence. This quantity has sometimes been used for feature selection in classification problems, where and are the conditional pdfs of a feature under two different classes. In the Banking and Finance industries, this quantity is referred to as Population Stability Index (PSI), and is used to assess distributional shifts in model features through time. An alternative is given via the -divergence, which can be interpreted as the expected information gain about from discovering which probability distribution is drawn from, or , if they currently have probabilities and respectively. The value gives the Jensen–Shannon divergence, defined by where is the average of the two distributions, We can also interpret as the capacity of a noisy information channel with two inputs giving the output distributions and . The Jensen–Shannon divergence, like all -divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold). Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M. Relationship to other probability-distance measures There are many other important measures of probability distance. Some of these are particularly connected with relative entropy. For example: The total-variation distance, . 
This is connected to the divergence through Pinsker's inequality: Pinsker's inequality is vacuous for any distributions where the relative entropy exceeds 2 nats, since the total variation distance is at most 1. For such distributions, an alternative bound can be used, due to Bretagnolle and Huber (see also Tsybakov): The family of Rényi divergences generalizes relative entropy. Depending on the value of a certain parameter, α, various inequalities may be deduced. Other notable measures of distance include the Hellinger distance, histogram intersection, Chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance. Data differencing Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch). See also Akaike information criterion Bayesian information criterion Bregman divergence Cross-entropy Deviance information criterion Entropic value at risk Entropy power inequality Hellinger distance Information gain in decision trees Information gain ratio Information theory and measure theory Jensen–Shannon divergence Quantum relative entropy Solomon Kullback and Richard Leibler Bhattacharyya distance References Kullback, S. (1959). Information Theory and Statistics. Republished by Dover Publications in 1968; reprinted in 1978. External links Information Theoretical Estimators Toolbox Ruby gem for calculating Kullback–Leibler divergence Jon Shlens' tutorial on Kullback–Leibler divergence and likelihood theory Matlab code for calculating Kullback–Leibler divergence for discrete distributions Sergio Verdú, Relative Entropy, NIPS 2009. One-hour video lecture. A modern summary of info-theoretic divergence measures Entropy and information F-divergences Information geometry Thermodynamics
Kullback–Leibler divergence
[ "Physics", "Chemistry", "Mathematics" ]
7,191
[ "Mathematical structures", "Physical quantities", "Entropy and information", "Entropy", "Category theory", "Thermodynamics", "Information geometry", "Dynamical systems" ]
1,115,085
https://en.wikipedia.org/wiki/Magnetic%20circuit
A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads. The relation between magnetic flux, magnetomotive force, and magnetic reluctance in an unsaturated magnetic circuit can be described by Hopkinson's law, which bears a superficial resemblance to Ohm's law in electrical circuits, resulting in a one-to-one correspondence between properties of a magnetic circuit and an analogous electric circuit. Using this concept the magnetic fields of complex devices such as transformers can be quickly solved using the methods and techniques developed for electrical circuits. Some examples of magnetic circuits are: horseshoe magnet with iron keeper (low-reluctance circuit) horseshoe magnet with no keeper (high-reluctance circuit) electric motor (variable-reluctance circuit) some types of pickup cartridge (variable-reluctance circuits) Magnetomotive force (MMF) Similar to the way that electromotive force (EMF) drives a current of electrical charge in electrical circuits, magnetomotive force (MMF) 'drives' magnetic flux through magnetic circuits. The term 'magnetomotive force', though, is a misnomer since it is not a force nor is anything moving. It is perhaps better to call it simply MMF. In analogy to the definition of EMF, the magnetomotive force around a closed loop is defined as: The MMF represents the potential that a hypothetical magnetic charge would gain by completing the loop. The magnetic flux that is driven is not a current of magnetic charge; it merely has the same relationship to MMF that electric current has to EMF. (See microscopic origins of reluctance below for a further description.) The unit of magnetomotive force is the ampere-turn (At), represented by a steady, direct electric current of one ampere flowing in a single-turn loop of electrically conducting material in a vacuum. The gilbert (Gb), established by the IEC in 1930, is the CGS unit of magnetomotive force and is a slightly smaller unit than the ampere-turn. The unit is named after William Gilbert (1544–1603) English physician and natural philosopher. The magnetomotive force can often be quickly calculated using Ampère's law. For example, the magnetomotive force of a long coil is: where N is the number of turns and I is the current in the coil. In practice this equation is used for the MMF of real inductors with N being the winding number of the inducting coil. Magnetic flux An applied MMF 'drives' magnetic flux through the magnetic components of the system. The magnetic flux through a magnetic component is proportional to the number of magnetic field lines that pass through the cross sectional area of that component. This is the net number, i.e. the number passing through in one direction, minus the number passing through in the other direction. The direction of the magnetic field vector B is by definition from the south to the north pole of a magnet inside the magnet; outside the field lines go from north to south. 
The flux through an element of area perpendicular to the direction of magnetic field is given by the product of the magnetic field and the area element. More generally, magnetic flux Φ is defined by a scalar product of the magnetic field and the area element vector. Quantitatively, the magnetic flux through a surface S is defined as the integral of the magnetic field over the area of the surface For a magnetic component the area S used to calculate the magnetic flux Φ is usually chosen to be the cross-sectional area of the component. The SI unit of magnetic flux is the weber (in derived units: volt-seconds), and the unit of magnetic flux density (or "magnetic induction", ) is the weber per square meter, or tesla. Circuit models The most common way of representing a magnetic circuit is the resistance–reluctance model, which draws an analogy between electrical and magnetic circuits. This model is good for systems that contain only magnetic components, but for modelling a system that contains both electrical and magnetic parts it has serious drawbacks. It does not properly model power and energy flow between the electrical and magnetic domains. This is because electrical resistance will dissipate energy whereas magnetic reluctance stores it and returns it later. An alternative model that correctly models energy flow is the gyrator–capacitor model. Resistance–reluctance model The resistance–reluctance model for magnetic circuits is a lumped-element model that makes electrical resistance analogous to magnetic reluctance. Hopkinson's law In electrical circuits, Ohm's law is an empirical relation between the EMF applied across an element and the current it generates through that element. It is written as: where R is the electrical resistance of that material. There is a counterpart to Ohm's law used in magnetic circuits. This law is often called Hopkinson's law, after John Hopkinson, but was actually formulated earlier by Henry Augustus Rowland in 1873. It states that where is the magnetomotive force (MMF) across a magnetic element, is the magnetic flux through the magnetic element, and is the magnetic reluctance of that element. (It will be shown later that this relationship is due to the empirical relationship between the H-field and the magnetic field B, , where μ is the permeability of the material). Like Ohm's law, Hopkinson's law can be interpreted either as an empirical equation that works for some materials, or it may serve as a definition of reluctance. Hopkinson's law is not a correct analogy with Ohm's law in terms of modelling power and energy flow. In particular, there is no power dissipation associated with a magnetic reluctance in the same way as there is a dissipation in an electrical resistance. The magnetic resistance that is a true analogy of electrical resistance in this respect is defined as the ratio of magnetomotive force and the rate of change of magnetic flux. Here rate of change of magnetic flux is standing in for electric current and the Ohm's law analogy becomes, where is the magnetic resistance. This relationship is part of an electrical-magnetic analogy called the gyrator-capacitor model and is intended to overcome the drawbacks of the reluctance model. The gyrator-capacitor model is, in turn, part of a wider group of compatible analogies used to model systems across multiple energy domains. Reluctance Magnetic reluctance, or magnetic resistance, is analogous to resistance in an electrical circuit (although it does not dissipate magnetic energy). 
In likeness to the way an electric field causes an electric current to follow the path of least resistance, a magnetic field causes magnetic flux to follow the path of least magnetic reluctance. It is a scalar, extensive quantity, akin to electrical resistance. The total reluctance is equal to the ratio of the MMF in a passive magnetic circuit and the magnetic flux in this circuit. In an AC field, the reluctance is the ratio of the amplitude values for a sinusoidal MMF and magnetic flux. (see phasors) The definition can be expressed as: where is the reluctance in ampere-turns per weber (a unit that is equivalent to turns per henry). Magnetic flux always forms a closed loop, as described by Maxwell's equations, but the path of the loop depends on the reluctance of the surrounding materials. It is concentrated around the path of least reluctance. Air and vacuum have high reluctance, while easily magnetized materials such as soft iron have low reluctance. The concentration of flux in low-reluctance materials forms strong temporary poles and causes mechanical forces that tend to move the materials towards regions of higher flux so it is always an attractive force(pull). The inverse of reluctance is called permeance. Its SI derived unit is the henry (the same as the unit of inductance, although the two concepts are distinct). Permeability and conductivity The reluctance of a magnetically uniform magnetic circuit element can be calculated as: where is the length of the element, is the permeability of the material ( is the relative permeability of the material (dimensionless), and is the permeability of free space), and is the cross-sectional area of the circuit. This is similar to the equation for electrical resistance in materials, with permeability being analogous to conductivity; the reciprocal of the permeability is known as magnetic reluctivity and is analogous to resistivity. Longer, thinner geometries with low permeabilities lead to higher reluctance. Low reluctance, like low resistance in electric circuits, is generally preferred. Summary of analogy The following table summarizes the mathematical analogy between electrical circuit theory and magnetic circuit theory. This is mathematical analogy and not a physical one. Objects in the same row have the same mathematical role; the physics of the two theories are very different. For example, current is the flow of electrical charge, while magnetic flux is not the flow of any quantity. Limitations of the analogy The resistance–reluctance model has limitations. Electric and magnetic circuits are only superficially similar because of the similarity between Hopkinson's law and Ohm's law. Magnetic circuits have significant differences that need to be taken into account in their construction: Electric currents represent the flow of particles (electrons) and carry power, part or all of which is dissipated as heat in resistances. Magnetic fields don't represent a "flow" of anything, and no power is dissipated in reluctances. The current in typical electric circuits is confined to the circuit, with very little "leakage". In typical magnetic circuits not all of the magnetic field is confined to the magnetic circuit because magnetic permeability also exists outside materials (see vacuum permeability). Thus, there may be significant "leakage flux" in the space outside the magnetic cores, which must be taken into account but is often difficult to calculate. 
Most importantly, magnetic circuits are nonlinear; the reluctance in a magnetic circuit is not constant, as resistance is, but varies depending on the magnetic field. At high magnetic fluxes the ferromagnetic materials used for the cores of magnetic circuits saturate, limiting further increase of the magnetic flux through, so above this level the reluctance increases rapidly. In addition, ferromagnetic materials suffer from hysteresis so the flux in them depends not just on the instantaneous MMF but also on the history of MMF. After the source of the magnetic flux is turned off, remanent magnetism is left in ferromagnetic materials, creating flux with no MMF. Circuit laws Magnetic circuits obey other laws that are similar to electrical circuit laws. For example, the total reluctance of reluctances in series is: This also follows from Ampère's law and is analogous to Kirchhoff's voltage law for adding resistances in series. Also, the sum of magnetic fluxes into any node is always zero: This follows from Gauss's law and is analogous to Kirchhoff's current law for analyzing electrical circuits. Together, the three laws above form a complete system for analysing magnetic circuits, in a manner similar to electric circuits. Comparing the two types of circuits shows that: The equivalent to resistance R is the reluctance The equivalent to current I is the magnetic flux Φ The equivalent to voltage V is the magnetomotive Force F Magnetic circuits can be solved for the flux in each branch by application of the magnetic equivalent of Kirchhoff's voltage law (KVL) for pure source/resistance circuits. Specifically, whereas KVL states that the voltage excitation applied to a loop is equal to the sum of the voltage drops (resistance times current) around the loop, the magnetic analogue states that the magnetomotive force (achieved from ampere-turn excitation) is equal to the sum of MMF drops (product of flux and reluctance) across the rest of the loop. (If there are multiple loops, the current in each branch can be solved through a matrix equation—much as a matrix solution for mesh circuit branch currents is obtained in loop analysis—after which the individual branch currents are obtained by adding and/or subtracting the constituent loop currents as indicated by the adopted sign convention and loop orientations.) Per Ampère's law, the excitation is the product of the current and the number of complete loops made and is measured in ampere-turns. Stated more generally: By Stokes's theorem, the closed line integral of around a contour is equal to the open surface integral of curl across the surface bounded by the closed contour. Since, from Maxwell's equations, , the closed line integral of evaluates to the total current passing through the surface. This is equal to the excitation, , which also measures current passing through the surface, thereby verifying that the net current flow through a surface is zero ampere-turns in a closed system that conserves energy. More complex magnetic systems, where the flux is not confined to a simple loop, must be analysed from first principles by using Maxwell's equations. Applications Air gaps can be created in the cores of certain transformers to reduce the effects of saturation. This increases the reluctance of the magnetic circuit, and enables it to store more energy before core saturation. This effect is used in the flyback transformers of cathode-ray tube video displays and in some types of switch-mode power supply. 
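The series-reluctance rule and Hopkinson's law above can be put together in a short lumped-element calculation. The sketch below is not from the original article; it is Python with made-up dimensions and an assumed constant relative permeability, so it ignores the saturation, hysteresis and leakage effects just discussed. It estimates the flux that a current-carrying coil drives around an iron core containing a small air gap.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def reluctance(length, area, mu_r=1.0):
    """Reluctance R = l / (mu0 * mu_r * A) of a uniform element, in ampere-turns per weber."""
    return length / (MU0 * mu_r * area)

# Made-up geometry: an iron core with a short air gap, same cross-section throughout.
core_length = 0.20     # m, mean magnetic path length in the iron
gap_length  = 0.001    # m, air gap
area        = 1e-4     # m^2, cross-sectional area
mu_r_iron   = 2000.0   # assumed constant relative permeability (no saturation)

R_core  = reluctance(core_length, area, mu_r_iron)
R_gap   = reluctance(gap_length, area)      # air: mu_r ~ 1
R_total = R_core + R_gap                    # reluctances in series add

# MMF of the winding, F = N * I, then Hopkinson's law F = Phi * R.
N, I = 500, 2.0
mmf  = N * I                                # ampere-turns
flux = mmf / R_total                        # Wb
B    = flux / area                          # T

print(f"R_core = {R_core:.3e}  R_gap = {R_gap:.3e}  (A-turns/Wb)")
print(f"flux = {flux:.3e} Wb   B = {B:.2f} T")
```

With these example numbers the 1 mm air gap contributes roughly ten times the reluctance of the 20 cm iron path, which is why small gaps dominate such circuits and why a deliberate gap lets the core store more energy before saturating, as in the flyback-transformer application mentioned above.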
Variation of reluctance is the principle behind the reluctance motor (or the variable reluctance generator) and the Alexanderson alternator. Multimedia loudspeakers are typically shielded magnetically, in order to reduce magnetic interference caused to televisions and other CRTs. The speaker magnet is covered with a material such as soft iron to minimize the stray magnetic field. Reluctance can also be applied to variable reluctance (magnetic) pickups. See also Magnetic capacitance Magnetic complex reluctance Tokamak References External links Magnetic–Electric Analogs by Dennis L. Feucht, Innovatia Laboratories (PDF) Interactive Java Tutorial on Magnetic Shunts National High Magnetic Field Laboratory Electromagnetism Electric and magnetic fields in matter Electrical analogies
Magnetic circuit
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,965
[ "Electromagnetism", "Physical phenomena", "Electric and magnetic fields in matter", "Materials science", "Fundamental interactions", "Condensed matter physics" ]
1,115,299
https://en.wikipedia.org/wiki/Safety%20testing%20of%20explosives
The safety testing of explosives involves the determination of various properties of the different energetic materials that are used in commercial, mining, and military applications. It is highly desirable to measure the conditions under which explosives can be set off for several reasons, including: safety in handling, safety in storage, and safety in use. It would be very difficult to provide an absolute scale for sensitivity with respect to the different properties of explosives. Therefore, it is generally required that one or more compounds be considered a standard for comparison to those compounds being tested. For example, PETN is considered to be a primary explosive by some individuals, and a secondary explosive by others. As a general rule, PETN is considered to be either a relatively insensitive primary explosive, or one of the most sensitive secondary explosives. PETN may be detonated by striking with a hammer on a hard steel surface (a very dangerous thing to do), and is generally considered the least sensitive explosive with which this may be done. For these facts and other reasons, PETN is considered one standard by which other explosives are gauged. Another explosive that is used as a calibration standard is TNT, which was afforded the arbitrary Figure of Insensitivity of 100. Other explosives could then be compared against this standard. Types of safety testing Because there are different ways to set off explosives, there are several different components to the safety testing of explosives: Impact testing: The impact testing of explosives is performed by dropping a fixed weight onto a prepared sample of the explosive to be tested from a given distance. The weight is released, impacts upon the sample, and the result is noted. The impact distances are determined and the results are analyzed by the sensitivity test and analysis methods selected. The two most common sensitivity test and analysis methods are the Bruceton analysis and Neyer d-optimal test. These methods allow the user to determine the 50% initiation level (the distance at which 50% of the samples will "go"), and a standard deviation. Impact testing may also be performed with liquid samples confined in special cells. Friction testing. There are several techniques through which explosives may be tested to determine their sensitivity to friction. One of the most popular is the ABL friction test, which uses a line of explosives on a prepared metal plate, placed in front of a specially prepared metal wheel that is forced down upon the plate with a hydraulic press. The metal plate is then struck with a pendulum to move it, squeezing the explosives between plate and wheel as the plate moves. Initiation is determined, and analyzed by the Bruceton analysis or Neyer d-optimal test, as above. BAM friction testing is similar, except that the sample is placed on a ceramic plate which is then moved side-to-side as a ceramic peg exerts force on the sample. Electrostatic discharge. Testing for ESD, or "spark" sensitivity of explosives is performed with a machine designed to discharge from a capacitor through a prepared sample. The Sandia National Labs design employs a dipping needle that punctures a sample cell and discharges the spark simultaneously. The amount of energy discharged into the cell becomes the variable in which Bruceton analysis or Neyer d-optimal test is performed to determine spark sensitivity. Thermal sensitivity. 
Determining the point at which a compound is capable of detonating under confinement with thermal stress is useful. A fixed quantity of material is placed in an aluminum blasting cap shell, and pressed into place with an aluminum plug. The sample is immersed in a hot metal bath, and the time-to-detonation is measured. If the time exceeds 60 seconds, a fresh sample is run at a higher temperature. In this manner, it is possible to determine the temperature at which an explosive will detonate on the small scale. Unlike the figures from the other tests above, this one can be misleading, because explosives present greater thermal hazards on the large scale. Therefore, the thermal sensitivity figures established using this technique are higher than one would expect in the real world. Thermal safety testing may also be performed via differential scanning calorimetry, in which a small (sub-milligram) sample is placed in a sample cell, and the temperature is increased slowly. The calorimeter determines how much energy is required to increase the temperature of the sample. Using this device, characteristics such as the melting point, phase transitions and decomposition temperature of an explosive may be determined. Used together, these numbers may be used to determine the potential threats posed by energetic materials when employed in the field. It cannot be stressed enough that these figures are relative; when an impact test shows, for example, that a tested explosive is less impact-sensitive than PETN, the number produced is dimensionless, but it means that the tested explosive is expected to require a greater impact to detonate than PETN would. Therefore, an experienced ordnance technician who works with raw PETN will know that the new explosive is not as sensitive with regard to impact. However, it could be more sensitive to friction, spark, or thermal stimuli. These conditions must be taken into account before any compound is to be stored, handled, or used in the field. Fireworks In the Netherlands, the Netherlands Organisation for Applied Scientific Research tests the safety of fireworks. According to a 2017 report by the Dutch Safety Board, 25% of all fireworks tested failed to meet safety standards and were banned from sale. Since 2010, safety testing of fireworks has been required in the entire European Union, but companies are allowed to test their products in one member state before importing and selling them in another. References Explosives engineering Explosives Safety
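The article names the Bruceton analysis and the Neyer d-optimal test as the standard ways to analyze go/no-go impact data. As a simpler, hedged illustration of the same idea (not part of the article, with made-up drop heights and outcomes), the sketch below estimates the 50% initiation level by fitting a logistic response curve with a coarse maximum-likelihood grid search; in practice the Bruceton (Dixon–Mood) or Neyer procedures cited above would be used.

```python
import numpy as np

# Hypothetical go/no-go impact data: drop heights in cm, initiations out of 5 trials each.
heights = np.array([10, 12, 14, 16, 18, 20, 22], dtype=float)
gos     = np.array([0, 1, 1, 3, 4, 5, 5])
trials  = np.full_like(gos, 5)

def log_likelihood(h50, slope):
    """Binomial log-likelihood of the data under a logistic initiation-probability curve."""
    p = 1.0 / (1.0 + np.exp(-slope * (heights - h50)))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)
    return np.sum(gos * np.log(p) + (trials - gos) * np.log(1.0 - p))

# Coarse grid search for the maximum-likelihood 50% initiation height and slope.
h50_grid = np.linspace(8.0, 24.0, 321)
slope_grid = np.linspace(0.05, 2.0, 200)
ll = np.array([[log_likelihood(h, s) for s in slope_grid] for h in h50_grid])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"estimated 50% initiation height ~ {h50_grid[i]:.1f} cm (slope ~ {slope_grid[j]:.2f} per cm)")
```

The 50% level found this way is only comparable between explosives tested on the same apparatus, which is why the article stresses that such figures are relative to a reference compound.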
Safety testing of explosives
[ "Chemistry", "Engineering" ]
1,130
[ "Explosives engineering", "Explosives", "Explosions" ]
1,115,424
https://en.wikipedia.org/wiki/New%20eugenics
New eugenics, also known as liberal eugenics (a term coined by bioethicist Nicholas Agar), advocates enhancing human characteristics and capacities through the use of reproductive technology and human genetic engineering. Those who advocate new eugenics generally think selecting or altering embryos should be left to the preferences of parents, rather than forbidden (or left to the preferences of the state). "New" eugenics purports to distinguish itself from the forms of eugenics practiced and advocated in the 20th century, which fell into disrepute after World War II. New eugenics practices Eugenics is sometimes broken into the categories of positive eugenics (encouraging reproduction among the designated "fit") and negative eugenics (discouraging or prohibiting reproduction among those designated "unfit"). Both positive and negative eugenic programs were advocated and pursued during the early 20th century. Negative programs were responsible for the compulsory sterilization of hundreds of thousands of persons in many countries, and were included in much of the rhetoric of Nazi eugenic policies of racial hygiene and genocide. According to its proponents, new eugenics belongs in the category of positive eugenics. New eugenics generally supports genetic modification or genetic selection of individuals for traits that are supposed to improve human welfare. The underlying idea is to improve the genetic basis of future generations and reduce the incidence of genetic diseases and other undesirable traits. Some of the practices included in new eugenics are: pre-implantation diagnosis and embryo selection, selective breeding, and human embryo engineering and gene therapy. Ethical status New eugenics was founded under the liberal ethical values of pluralism, which advocates for the respect of personal autonomy, and egalitarianism, which represents the idea of equality for all people. Arguments used in favor of new eugenics include that it is in the best interest of society that life succeeds rather than fails, and that it is acceptable to ensure that progeny has a chance of achieving this success. Ethical arguments against new eugenics include the claim that creating designer babies is not in the best interest of society as it might create a divide between genetically modified individuals and natural individuals. Additionally, some of these technologies might be economically restrictive, further increasing the socio-economic gap. Dov Fox, a law professor at the University of San Diego, argues that liberal eugenics cannot be justified on the basis of the underlying liberal theory which inspires its name. Instead, he argues that those liberal premises point toward traditional, coercive eugenics: reprogenetic technologies like embryo selection, cellular surgery, and human genetic engineering, which aim to enhance general-purpose traits in offspring, would not be practices a liberal government leaves to the discretion of parents, but practices the state makes compulsory. Fox argues that if the liberal commitment to autonomy is important enough for the state to mandate childrearing practices such as health care and basic education, that very same interest is important enough for the state to mandate safe, effective, and functionally integrated genetic practices that act on analogous all-purpose traits such as resistance to disease and general cognitive functioning. He concludes that the liberal case for compulsory eugenics is a reductio ad absurdum against liberal theory.
The United Nations International Bioethics Committee wrote that new eugenics should not be confused with the ethical problems of the 20th century eugenics movements. They have also stated the notion is nevertheless problematic as it challenges the idea of human equality and opens up new ways of discrimination and stigmatization against those who do not want or cannot afford the enhancements. See also Biohappiness Directed evolution (transhumanism) Mendelian inheritance Eugenics in France Notes References Further reading Applied genetics Bioethics Medical ethics Transhumanism
New eugenics
[ "Technology", "Engineering", "Biology" ]
745
[ "Bioethics", "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
1,116,216
https://en.wikipedia.org/wiki/Delta%20ray
A delta ray is a secondary electron with enough energy to escape a significant distance away from the primary radiation beam and produce further ionization. The term is sometimes used to describe any recoil particle caused by secondary ionization. The term was coined by J. J. Thomson. Characteristics Delta rays are characterized as very fast electrons produced in quantity by alpha particles or other fast energetic charged particles knocking orbiting electrons out of atoms. Collectively, these electrons are defined as delta radiation when they have sufficient energy to ionize further atoms through subsequent interactions on their own. Delta rays appear as branches off the main track in a cloud chamber. These branches appear nearer the start of the track of a heavy charged particle, where more energy is imparted to the ionized electrons. Delta rays in particle accelerators Otherwise called a knock-on electron, the term "delta ray" is also used in high energy physics to describe single electrons in particle accelerators that exhibit characteristic deceleration. In a bubble chamber, electrons lose their energy more quickly than other particles through Bremsstrahlung and create a spiral track due to their small mass and the magnetic field. The Bremsstrahlung rate is proportional to the square of the acceleration of the electron. Epsilon ray An epsilon ray, or epsilon radiation, is a type of tertiary radiation. Epsilon rays are a form of particle radiation and are composed of electrons. The term was coined by J. J. Thomson, but is very rarely used as of 2019. See also List of particles Particle physics Radioactivity Radiation Rays: α (alpha) rays β (beta) rays X-ray γ (gamma) rays References "Delta ray" on Britannica Online "Delta electrons" in the McGraw-Hill Encyclopedia of Science & Technology Online Radiation Radioactivity Electron
Delta ray
[ "Physics", "Chemistry" ]
374
[ "Transport phenomena", "Electron", "Physical phenomena", "Molecular physics", "Waves", "Radiation", "Nuclear physics", "Radioactivity" ]
1,117,315
https://en.wikipedia.org/wiki/Lyapunov%20time
In mathematics, the Lyapunov time is the characteristic timescale on which a dynamical system is chaotic. It is named after the Russian mathematician Aleksandr Lyapunov. It is defined as the inverse of a system's largest Lyapunov exponent. Use The Lyapunov time mirrors the limits of the predictability of the system. By convention, it is defined as the time for the distance between nearby trajectories of the system to increase by a factor of e. However, measures in terms of 2-foldings and 10-foldings are sometimes found, since they correspond to the loss of one bit of information or one digit of precision respectively. While it is used in many applications of dynamical systems theory, it has been particularly used in celestial mechanics where it is important for the problem of the stability of the Solar System. However, empirical estimation of the Lyapunov time is often associated with computational or inherent uncertainties. Examples Typical values are: See also Belousov–Zhabotinsky reaction Molecular chaos Three-body problem References Dynamical systems
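As a hedged, self-contained illustration of these definitions (not part of the article), the sketch below estimates the largest Lyapunov exponent of the logistic map numerically and converts it into an e-folding Lyapunov time and a 10-folding time; for r = 4 the exponent is known to be ln 2 per iteration, so the Lyapunov time is about 1.44 iterations.

```python
import numpy as np

def lyapunov_time_logistic(r=4.0, x0=0.2, n_transient=1000, n_steps=100_000):
    """Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x)
    and return (Lyapunov time in iterations, exponent per iteration)."""
    x = x0
    # Discard a transient so the orbit settles onto the attractor.
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    log_sum = 0.0
    for _ in range(n_steps):
        # |f'(x)| = |r*(1 - 2x)| measures the local stretching of nearby trajectories.
        log_sum += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    lyap_exponent = log_sum / n_steps          # e-folding rate per iteration
    return 1.0 / lyap_exponent, lyap_exponent

tau_e, lam = lyapunov_time_logistic()
print(f"largest Lyapunov exponent ~ {lam:.3f} per iteration (ln 2 expected for r = 4)")
print(f"Lyapunov (e-folding) time ~ {tau_e:.2f} iterations")
print(f"10-folding time ~ {np.log(10.0) / lam:.2f} iterations")
```

The 2-folding and 10-folding times mentioned above are simply ln 2 and ln 10 divided by the same exponent.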
Lyapunov time
[ "Physics", "Mathematics" ]
225
[ "Mechanics", "Dynamical systems" ]
1,117,979
https://en.wikipedia.org/wiki/VSEPR%20theory
Valence shell electron pair repulsion (VSEPR) theory ( , ) is a model used in chemistry to predict the geometry of individual molecules from the number of electron pairs surrounding their central atoms. It is also named the Gillespie-Nyholm theory after its two main developers, Ronald Gillespie and Ronald Nyholm. The premise of VSEPR is that the valence electron pairs surrounding an atom tend to repel each other. The greater the repulsion, the higher in energy (less stable) the molecule is. Therefore, the VSEPR-predicted molecular geometry of a molecule is the one that has as little of this repulsion as possible. Gillespie has emphasized that the electron-electron repulsion due to the Pauli exclusion principle is more important in determining molecular geometry than the electrostatic repulsion. The insights of VSEPR theory are derived from topological analysis of the electron density of molecules. Such quantum chemical topology (QCT) methods include the electron localization function (ELF) and the quantum theory of atoms in molecules (AIM or QTAIM). History The idea of a correlation between molecular geometry and number of valence electron pairs (both shared and unshared pairs) was originally proposed in 1939 by Ryutaro Tsuchida in Japan, and was independently presented in a Bakerian Lecture in 1940 by Nevil Sidgwick and Herbert Powell of the University of Oxford. In 1957, Ronald Gillespie and Ronald Sydney Nyholm of University College London refined this concept into a more detailed theory, capable of choosing between various alternative geometries. Overview VSEPR theory is used to predict the arrangement of electron pairs around central atoms in molecules, especially simple and symmetric molecules. A central atom is defined in this theory as an atom which is bonded to two or more other atoms, while a terminal atom is bonded to only one other atom. For example in the molecule methyl isocyanate (H3C-N=C=O), the two carbons and one nitrogen are central atoms, and the three hydrogens and one oxygen are terminal atoms. The geometry of the central atoms and their non-bonding electron pairs in turn determine the geometry of the larger whole molecule. The number of electron pairs in the valence shell of a central atom is determined after drawing the Lewis structure of the molecule, and expanding it to show all bonding groups and lone pairs of electrons. In VSEPR theory, a double bond or triple bond is treated as a single bonding group. The sum of the number of atoms bonded to a central atom and the number of lone pairs formed by its nonbonding valence electrons is known as the central atom's steric number. The electron pairs (or groups if multiple bonds are present) are assumed to lie on the surface of a sphere centered on the central atom and tend to occupy positions that minimize their mutual repulsions by maximizing the distance between them. The number of electron pairs (or groups), therefore, determines the overall geometry that they will adopt. For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere. Therefore, the central atom is predicted to adopt a linear geometry. If there are 3 electron pairs surrounding the central atom, their repulsion is minimized by placing them at the vertices of an equilateral triangle centered on the atom. Therefore, the predicted geometry is trigonal. Likewise, for 4 electron pairs, the optimal arrangement is tetrahedral. 
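As a small, hedged illustration of the paragraph above (not part of the article), the sketch below maps the number of electron pairs around a central atom to the idealized arrangement and reproduces the tetrahedral angle arccos(−1/3) quoted later in the article.

```python
import math

# Idealized electron-pair arrangements for small steric numbers, as predicted by
# mutual repulsion of pairs confined to a sphere around the central atom.
IDEAL_ARRANGEMENT = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

def arrangement(electron_pairs):
    return IDEAL_ARRANGEMENT.get(electron_pairs, "higher coordination, not tabulated here")

# The ideal tetrahedral angle: cos^-1(-1/3), roughly 109 degrees 28 minutes.
theta = math.degrees(math.acos(-1.0 / 3.0))
print(arrangement(4), f"ideal bond angle = {theta:.2f} degrees")
```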
As a tool in predicting the geometry adopted with a given number of electron pairs, an often used physical demonstration of the principle of minimal electron pair repulsion utilizes inflated balloons. Through handling, balloons acquire a slight surface electrostatic charge that results in the adoption of roughly the same geometries when they are tied together at their stems as the corresponding number of electron pairs. For example, five balloons tied together adopt the trigonal bipyramidal geometry, just as do the five bonding pairs of a PCl5 molecule. Steric number The steric number of a central atom in a molecule is the number of atoms bonded to that central atom, called its coordination number, plus the number of lone pairs of valence electrons on the central atom. In the molecule SF4, for example, the central sulfur atom has four ligands; the coordination number of sulfur is four. In addition to the four ligands, sulfur also has one lone pair in this molecule. Thus, the steric number is 4 + 1 = 5. Degree of repulsion The overall geometry is further refined by distinguishing between bonding and nonbonding electron pairs. The bonding electron pair shared in a sigma bond with an adjacent atom lies further from the central atom than a nonbonding (lone) pair of that atom, which is held close to its positively charged nucleus. VSEPR theory therefore views repulsion by the lone pair to be greater than the repulsion by a bonding pair. As such, when a molecule has 2 interactions with different degrees of repulsion, VSEPR theory predicts the structure where lone pairs occupy positions that allow them to experience less repulsion. Lone pair–lone pair (lp–lp) repulsions are considered stronger than lone pair–bonding pair (lp–bp) repulsions, which in turn are considered stronger than bonding pair–bonding pair (bp–bp) repulsions, distinctions that then guide decisions about overall geometry when 2 or more non-equivalent positions are possible. For instance, when 5 valence electron pairs surround a central atom, they adopt a trigonal bipyramidal molecular geometry with two collinear axial positions and three equatorial positions. An electron pair in an axial position has three close equatorial neighbors only 90° away and a fourth much farther at 180°, while an equatorial electron pair has only two adjacent pairs at 90° and two at 120°. The repulsion from the close neighbors at 90° is more important, so that the axial positions experience more repulsion than the equatorial positions; hence, when there are lone pairs, they tend to occupy equatorial positions as shown in the diagrams of the next section for steric number five. The difference between lone pairs and bonding pairs may also be used to rationalize deviations from idealized geometries. For example, the H2O molecule has four electron pairs in its valence shell: two lone pairs and two bond pairs. The four electron pairs are spread so as to point roughly towards the apices of a tetrahedron. However, the bond angle between the two O–H bonds is only 104.5°, rather than the 109.5° of a regular tetrahedron, because the two lone pairs (whose density or probability envelopes lie closer to the oxygen nucleus) exert a greater mutual repulsion than the two bond pairs. A bond of higher bond order also exerts greater repulsion since the pi bond electrons contribute. For example in isobutylene, (H3C)2C=CH2, the H3C−C=C angle (124°) is larger than the H3C−C−CH3 angle (111.5°). 
However, in the carbonate ion, CO32−, all three C−O bonds are equivalent with angles of 120° due to resonance. AXE method The "AXE method" of electron counting is commonly used when applying the VSEPR theory. The electron pairs around a central atom are represented by a formula AXmEn, where A represents the central atom and always has an implied subscript one. Each X represents a ligand (an atom bonded to A). Each E represents a lone pair of electrons on the central atom. The total number of X and E is known as the steric number. For example, in a molecule AX3E2, the atom A has a steric number of 5. When the substituent (X) atoms are not all the same, the geometry is still approximately valid, but the bond angles may be slightly different from the ones where all the outside atoms are the same. For example, the double-bond carbons in alkenes like C2H4 are AX3E0, but the bond angles are not all exactly 120°. Likewise, SOCl2 is AX3E1, but because the X substituents are not identical, the X–A–X angles are not all equal. Based on the steric number and distribution of Xs and Es, VSEPR theory makes the predictions in the following tables. Main-group elements For main-group elements, there are stereochemically active lone pairs E whose number can vary from 0 to 3. Note that the geometries are named according to the atomic positions only and not the electron arrangement. For example, the description of AX2E1 as a bent molecule means that the three atoms AX2 are not in one straight line, although the lone pair helps to determine the geometry. Transition metals (Kepert model) The lone pairs on transition metal atoms are usually stereochemically inactive, meaning that their presence does not change the molecular geometry. For example, the hexaaquo complexes M(H2O)6 are all octahedral for M = V3+, Mn3+, Co3+, Ni2+ and Zn2+, despite the fact that the electronic configurations of the central metal ion are d2, d4, d6, d8 and d10 respectively. The Kepert model ignores all lone pairs on transition metal atoms, so that the geometry around all such atoms corresponds to the VSEPR geometry for AXn with 0 lone pairs E. This is often written MLn, where M = metal and L = ligand. The Kepert model predicts the following geometries for coordination numbers of 2 through 9: Examples The methane molecule (CH4) is tetrahedral because there are four pairs of electrons. The four hydrogen atoms are positioned at the vertices of a tetrahedron, and the bond angle is cos−1(−1/3) ≈ 109° 28′. This is referred to as an AX4 type of molecule. As mentioned above, A represents the central atom and X represents an outer atom. The ammonia molecule (NH3) has three pairs of electrons involved in bonding, but there is a lone pair of electrons on the nitrogen atom. It is not bonded with another atom; however, it influences the overall shape through repulsions. As in methane above, there are four regions of electron density. Therefore, the overall orientation of the regions of electron density is tetrahedral. On the other hand, there are only three outer atoms. This is referred to as an AX3E type molecule because the lone pair is represented by an E. By definition, the molecular shape or geometry describes the geometric arrangement of the atomic nuclei only, which is trigonal-pyramidal for NH3. Steric numbers of 7 or greater are possible, but are less common. The steric number of 7 occurs in iodine heptafluoride (IF7); the base geometry for a steric number of 7 is pentagonal bipyramidal.
The most common geometry for a steric number of 8 is a square antiprismatic geometry. Examples of this include the octacyanomolybdate (Mo(CN)84−) and octafluorozirconate (ZrF84−) anions. The nonahydridorhenate ion (ReH92−) in potassium nonahydridorhenate is a rare example of a compound with a steric number of 9, which has a tricapped trigonal prismatic geometry. Steric numbers beyond 9 are very rare, and it is not clear what geometry is generally favoured. Possible geometries for steric numbers of 10, 11, 12, or 14 are bicapped square antiprismatic (or bicapped dodecadeltahedral), octadecahedral, icosahedral, and bicapped hexagonal antiprismatic, respectively. No compounds with steric numbers this high involving monodentate ligands exist, and those involving multidentate ligands can often be analysed more simply as complexes with lower steric numbers when some multidentate ligands are treated as a unit. Exceptions There are groups of compounds where VSEPR fails to predict the correct geometry. Some AX2E0 molecules The shapes of heavier Group 14 element alkyne analogues (RM≡MR, where M = Si, Ge, Sn or Pb) have been computed to be bent. Some AX2E2 molecules One example of the AX2E2 geometry is molecular lithium oxide, Li2O, a linear rather than bent structure, which is ascribed to its bonds being essentially ionic and the strong lithium-lithium repulsion that results. Another example is O(SiH3)2 with an Si–O–Si angle of 144.1°, which compares to the angles in Cl2O (110.9°), (CH3)2O (111.7°), and N(CH3)3 (110.9°). Gillespie and Robinson rationalize the Si–O–Si bond angle based on the observed ability of a ligand's lone pair to most greatly repel other electron pairs when the ligand electronegativity is greater than or equal to that of the central atom. In O(SiH3)2, the central atom is more electronegative, and the lone pairs are less localized and more weakly repulsive. The larger Si–O–Si bond angle results from this and strong ligand-ligand repulsion by the relatively large -SiH3 ligand. Burford et al. showed through X-ray diffraction studies that Cl3Al–O–PCl3 has a linear Al–O–P bond angle and is therefore a non-VSEPR molecule. Some AX6E1 and AX8E1 molecules Some AX6E1 molecules, e.g. xenon hexafluoride (XeF6) and the hexahalide anions of Te(IV) and Bi(III), such as TeCl62− and BiCl63−, are octahedral, rather than pentagonal pyramids, and the lone pair does not affect the geometry to the degree predicted by VSEPR. Similarly, the octafluoroxenate ion (XeF82−) in nitrosonium octafluoroxenate(VI) is a square antiprism with minimal distortion, despite having a lone pair. One rationalization is that steric crowding of the ligands allows little or no room for the non-bonding lone pair; another rationalization is the inert-pair effect. Square planar ML4 complexes The Kepert model predicts that ML4 transition metal molecules are tetrahedral in shape, and it cannot explain the formation of square planar complexes. The majority of such complexes exhibit a d8 configuration as for the tetrachloroplatinate (PtCl42−) ion. The explanation of the shape of square planar complexes involves electronic effects and requires the use of crystal field theory. Complexes with strong d-contribution Some transition metal complexes with low d electron count have unusual geometries, which can be ascribed to d subshell bonding interaction. Gillespie found that this interaction produces bonding pairs that also occupy the respective antipodal points (ligand opposed) of the sphere.
This phenomenon is an electronic effect resulting from the bilobed shape of the underlying sdx hybrid orbitals. The repulsion of these bonding pairs leads to a different set of shapes. The gas phase structures of the triatomic halides of the heavier members of group 2 (i.e., calcium, strontium and barium halides, MX2) are not linear as predicted but are bent (approximate X–M–X angles: CaF2, 145°; SrF2, 120°; BaF2, 108°; SrCl2, 130°; BaCl2, 115°; BaBr2, 115°; BaI2, 105°). It has been proposed by Gillespie that this is also caused by bonding interaction of the ligands with the d subshell of the metal atom, thus influencing the molecular geometry. Superheavy elements Relativistic effects on the electron orbitals of superheavy elements are predicted to influence the molecular geometry of some compounds. For instance, the 6d5/2 electrons in nihonium play an unexpectedly strong role in bonding, so NhF3 should assume a T-shaped geometry, instead of a trigonal planar geometry like its lighter congener BF3. In contrast, the extra stability of the 7p1/2 electrons in tennessine is predicted to make TsF3 trigonal planar, unlike the T-shaped geometry observed for IF3 and predicted for AtF3; similarly, OgF4 should have a tetrahedral geometry, while XeF4 has a square planar geometry and RnF4 is predicted to have the same. Odd-electron molecules The VSEPR theory can be extended to molecules with an odd number of electrons by treating the unpaired electron as a "half electron pair"; for example, Gillespie and Nyholm suggested that the decrease in the bond angle in the series NO2+ (180°), NO2 (134°), NO2− (115°) indicates that a given set of bonding electron pairs exerts a weaker repulsion on a single non-bonding electron than on a pair of non-bonding electrons. In effect, they considered nitrogen dioxide as an AX2E0.5 molecule, with a geometry intermediate between NO2+ and NO2−. Similarly, chlorine dioxide (ClO2) is an AX2E1.5 molecule, with a geometry intermediate between ClO2+ and ClO2−. Finally, the methyl radical (CH3) is predicted to be trigonal pyramidal like the methyl anion (CH3−), but with a larger bond angle (as in the trigonal planar methyl cation (CH3+)). However, in this case, the VSEPR prediction is not quite true, as CH3 is actually planar, although its distortion to a pyramidal geometry requires very little energy. See also Bent's rule (effect of ligand electronegativity) Comparison of software for molecular mechanics modeling Linear combination of atomic orbitals Molecular geometry Molecular modelling Molecular Orbital Theory (MOT) Thomson problem Valence Bond Theory (VBT) Valency interaction formula References Further reading External links VSEPR AR—3D VSEPR Theory Visualization with Augmented Reality app 3D Chem—Chemistry, structures, and 3D molecules Indiana University Molecular Structure Center (IUMSC) Chemistry theories Molecular geometry Stereochemistry Quantum chemistry
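As a hedged illustration of the AXE classification described earlier (not part of the article), the sketch below tabulates the molecular shapes for common AXmEn combinations and returns the steric number alongside the shape; the examples in the comments match molecules discussed above.

```python
# Molecular shapes predicted by the AXE method for common (X, E) combinations;
# X = number of bonded atoms, E = number of lone pairs on the central atom.
AXE_SHAPES = {
    (2, 0): "linear",               (3, 0): "trigonal planar",    (2, 1): "bent",
    (4, 0): "tetrahedral",          (3, 1): "trigonal pyramidal", (2, 2): "bent",
    (5, 0): "trigonal bipyramidal", (4, 1): "seesaw",             (3, 2): "T-shaped",
    (2, 3): "linear",
    (6, 0): "octahedral",           (5, 1): "square pyramidal",   (4, 2): "square planar",
}

def vsepr_shape(bonded_atoms, lone_pairs):
    """Return (steric number, molecular shape) for an AX(m)E(n) classification."""
    steric_number = bonded_atoms + lone_pairs
    shape = AXE_SHAPES.get((bonded_atoms, lone_pairs), "not tabulated here")
    return steric_number, shape

print(vsepr_shape(4, 0))   # CH4  -> (4, 'tetrahedral')
print(vsepr_shape(3, 1))   # NH3  -> (4, 'trigonal pyramidal')
print(vsepr_shape(2, 2))   # H2O  -> (4, 'bent')
print(vsepr_shape(4, 1))   # SF4  -> (5, 'seesaw')
```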
VSEPR theory
[ "Physics", "Chemistry" ]
3,824
[ "Quantum chemistry", "Molecular geometry", "Molecules", "Stereochemistry", "Quantum mechanics", "Theoretical chemistry", "Space", " molecular", "nan", "Atomic", "Spacetime", "Matter", " and optical physics" ]
12,293,814
https://en.wikipedia.org/wiki/Girt
In architecture or structural engineering, a girt, also known as a sheeting rail, is a horizontal structural member in a framed wall. Girts provide lateral support to the wall panel, primarily to resist wind loads. A comparable element in roof construction is a purlin. Stability in steel building construction The girt is commonly used as a stabilizing element to the primary structure (e.g. column, post). Wall cladding fastened to the girt, or a discrete bracing system which includes the girt, can provide shear resistance, in the plane of the wall, along the length of the primary member. Since the girts are normally fastened to, or near, the exterior flange of a column, stability braces may be installed at a girt to resist rotation of the unsupported, inner flange of the primary member. The girt system must be competent and adequately stiff to provide the required stabilizing resistance in addition to its role as a wall panel support. Girts are stabilized by (sag) rods/angles/straps and by the wall cladding. Stabilizing rods are discrete brace members used to prevent rotation of an unsupported flange of the girt. Sheet metal wall panels are usually considered to provide lateral bracing to the connected (typically exterior) flange along the length of the girt. Under restricted circumstances, sheet metal wall panels are also capable of providing rotational restraint to the girt section. In general: Girt supports panel, panel stabilizes girt; Column supports girt, girt stabilizes column. The building designer should be knowledgeable in the complexities of this interactive design condition to ensure competent design of the complete structure. See also Girder References Construction Structural system Timber framing
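A minimal illustrative check (not from the article, with assumed values): treating a girt as a simply supported beam carrying a uniform wind line load, compute the maximum bending moment and midspan deflection. The wind pressure, tributary height, span, and section properties below are hypothetical placeholders; a real design would use the loads, combinations, and deflection limits of the governing code.

```python
def girt_udl_check(wind_pressure_kpa, tributary_height_m, span_m, E_mpa, I_mm4):
    """Simply supported girt under a uniform wind load.
    Returns (max bending moment in kN*m, midspan deflection in mm)."""
    w = wind_pressure_kpa * tributary_height_m      # line load in kN/m (== N/mm)
    m_max = w * span_m ** 2 / 8.0                   # kN*m for a simply supported span
    # Midspan deflection: 5 w L^4 / (384 E I), with w in N/mm, L in mm, E in MPa, I in mm^4.
    delta = 5.0 * w * (span_m * 1000.0) ** 4 / (384.0 * E_mpa * I_mm4)
    return m_max, delta

# Hypothetical numbers: 1.0 kPa wind, 1.5 m tributary height, 6 m span, steel section.
M, d = girt_udl_check(wind_pressure_kpa=1.0, tributary_height_m=1.5,
                      span_m=6.0, E_mpa=200_000.0, I_mm4=4.0e6)
print(f"M_max = {M:.2f} kN*m, midspan deflection = {d:.1f} mm")
```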
Girt
[ "Technology", "Engineering" ]
372
[ "Structural engineering", "Timber framing", "Building engineering", "Structural system", "Construction", "Architecture stubs", "Architecture" ]
12,295,288
https://en.wikipedia.org/wiki/Hampson%E2%80%93Linde%20cycle
The Hampson–Linde cycle is a process for the liquefaction of gases, especially for air separation. William Hampson and Carl von Linde independently filed for patents of the cycle in 1895: Hampson on 23 May 1895 and Linde on 5 June 1895. The Hampson–Linde cycle introduced regenerative cooling, a positive-feedback cooling system. The heat exchanger arrangement permits an absolute temperature difference (e.g. J–T cooling for air) to go beyond a single stage of cooling and can reach the low temperatures required to liquefy "fixed" gases. The Hampson–Linde cycle differs from the Siemens cycle only in the expansion step. Whereas the Siemens cycle has the gas do external work to reduce its temperature, the Hampson–Linde cycle relies solely on the Joule–Thomson effect; this has the advantage that the cold side of the cooling apparatus needs no moving parts. The cycle The cooling cycle proceeds in several steps: The gas is compressed, which adds external energy into the gas, to give it what is needed for running through the cycle. Linde's US patent gives an example with the low side pressure of and high side pressure of . The high pressure gas is then cooled by immersing the gas in a cooler environment; the gas loses some of its energy (heat). Linde's patent example gives an example of brine at 10°C. The high pressure gas is further cooled with a countercurrent heat exchanger; the cooler gas leaving the last stage cools the gas going to the last stage. The gas is further cooled by passing the gas through a Joule–Thomson orifice (expansion valve); the gas is now at the lower pressure. The low pressure gas is now at its coolest in the current cycle. Some of the gas condenses and becomes output product. The low pressure gas is directed back to the countercurrent heat exchanger to cool the warmer, incoming, high-pressure gas. After leaving the countercurrent heat exchanger, the gas is warmer than it was at its coldest, but cooler than it started out at step 1. The gas is sent back to the compressor, mixed with warm incoming makeup gas (to replace condensed product), and returned to the compressor to make another trip through the cycle (and become still colder). In each cycle the net cooling is more than the heat added at the beginning of the cycle. As the gas passes more cycles and becomes cooler, reaching lower temperatures at the expansion valve becomes more difficult. References Further reading Thermodynamic cycles Cryogenics Industrial gases 1895 in science 1895 in Germany
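A minimal sketch (not part of the article) of the single-pass cooling produced at the expansion valve: for an isenthalpic Joule–Thomson expansion, the temperature drop is roughly the Joule–Thomson coefficient times the pressure drop, and the countercurrent heat exchanger accumulates this drop over successive cycles. The coefficient below is an assumed order-of-magnitude value for air near room temperature, and the pressures are illustrative, not the patent figures.

```python
def jt_temperature_drop(p_high_bar, p_low_bar, mu_jt_k_per_bar=0.25):
    """Approximate temperature drop (K) for an isenthalpic Joule-Thomson expansion,
    assuming a constant JT coefficient over the pressure range (a rough approximation;
    in reality the coefficient varies with temperature and pressure)."""
    return mu_jt_k_per_bar * (p_high_bar - p_low_bar)

# Illustrative example: expanding air from 200 bar to 20 bar cools it by roughly 45 K
# per pass; regenerative cooling through the heat exchanger compounds this each cycle.
print(f"single-pass drop ~ {jt_temperature_drop(200.0, 20.0):.0f} K")
```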
Hampson–Linde cycle
[ "Physics", "Chemistry" ]
541
[ "Chemical process engineering", "Applied and interdisciplinary physics", "Cryogenics", "Industrial gases" ]
12,302,694
https://en.wikipedia.org/wiki/Thermomechanical%20generator
The Harwell TMG Stirling engine, an abbreviation for "Thermo-Mechanical Generator", was invented in 1967 by E. H. Cooke-Yarborough at the Harwell Labs of the United Kingdom Atomic Energy Authority. It was intended to be a remote electrical power source with low cost and very long life, albeit by sacrificing some efficiency. The TMG (model TMG120) was at one time the only Stirling engine sold by a manufacturer, namely HoMach Systems Ltd., England. Description The engine has near isothermal cylinders because 1) the heater area covers the entire cylinder end, 2) it is a short stroke device, with wide shallow cylinders, yielding a high surface area to volume ratio, 3) the average thickness of the gas space is about 0.1 cm, and 4) the working fluid is helium, a gas having good thermal properties for Stirling engines. The engine's displacer also has very low losses. These low-loss operating characteristics simplify the engine analysis, compared to more conventional Stirling engines. The design has many advantages over conventional Stirling engines. The simplicity of the heater greatly reduces the cost by allowing the TMG to avoid the need for a brazed tubular or finned heater, which can account for 40% of the cost of a conventional Stirling engine. The heat exchangers for the heater and cooler are mechanically trivial. The regenerator is a simple annulus, referred to as a "flat plate". Along with the cylinder wall and the displacer, there are a total of four regenerating surfaces. The TMG is a free piston engine. There are no rolling bearings or sliding seals, thus there is very little friction or wear. The working space is hermetically sealed, allowing it to contain pressurized helium gas for many thousands of hours. The displacer is a stainless steel can, 27 cm in diameter. It is suspended by a low-loss planar metal spring centered in a 27.4 cm diameter cylinder. The 2 mm radial clearance is divided into two concentric annular gaps by a thin, open-ended cylinder, which is fixed to the engine's cylinder. This annulus acts as the regenerator, which is much less costly than a wire-mesh type. The engine is a "free-cylinder" design, in which the entire engine is mounted on springs and allowed to vibrate slightly. This allows the displacer to be driven by positive feedback from the motion of the power piston and the linear-alternator magnets, which have a combined weight of 10 kg. The unique power piston was invented by Cooke-Yarborough, and is called an "articulated diaphragm". It consists of a stainless steel annulus, with an outer diameter of 35 cm and an inner diameter of 26 cm. This annulus is clamped to the engine on the outer edge by two flexible rubber o-rings, and on the inner edge it is similarly clamped, in this case to a rigid center hub that makes up the piston's center. The o-rings flex but do not slide, thus no lubricant is needed and there is negligible wear in the entire machine. The compression space is located between the power-piston hub and the displacer, and this space is cooled by direct conduction through the power piston. A developmental model of the TMG contained a double articulated diaphragm containing cooling water, which was pumped by a thermosyphon. The depth of the compression space varies from 0.2 to 2.7 mm, as governed by the 2 mm displacer stroke and the 1.5 mm power piston stroke moving 90 degrees out of phase. The TMG engine successfully overcomes many of the economic and mechanical difficulties common in conventional Stirling engines.
However, there are some limitations of this design. The simple, low-cost annular regenerator is inefficient compared to other types, (and this contributes to this engine's somewhat low thermal efficiency of only 10%). The mechanical limitations of the articulated diaphragm only allow a maximum stroke of an estimated 3 mm. These properties limit the maximum obtainable power to about 500 - 1000 Watts from an engine of this design. Nevertheless, it is rare for a low-cost Stirling engine to obtain this high level of reliability and operating life, which can only be attributed to the ingenuity of the design. References Mechanical engineering Hot air engines Piston engines
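A rough sketch (not part of the article) relating the quoted ~10% overall thermal efficiency to the heat that must be supplied at the hot end for the stated 500–1000 W electrical output range.

```python
def required_heat_input(electrical_power_w, thermal_efficiency=0.10):
    """Heat input (W) needed at the hot end for a given electrical output,
    assuming the ~10% overall thermal efficiency quoted above."""
    return electrical_power_w / thermal_efficiency

for p_out in (500.0, 1000.0):
    q_in = required_heat_input(p_out)
    print(f"{p_out:.0f} W electrical output -> ~{q_in / 1000:.1f} kW heat input")
```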
Thermomechanical generator
[ "Physics", "Technology", "Engineering" ]
936
[ "Piston engines", "Engines", "Applied and interdisciplinary physics", "Mechanical engineering" ]
12,305,030
https://en.wikipedia.org/wiki/Anderson%20impurity%20model
The Anderson impurity model, named after Philip Warren Anderson, is a Hamiltonian that is used to describe magnetic impurities embedded in metals. It is often applied to the description of Kondo effect-type problems, such as heavy fermion systems and Kondo insulators. In its simplest form, the model contains a term describing the kinetic energy of the conduction electrons, a two-level term with an on-site Coulomb repulsion that models the impurity energy levels, and a hybridization term that couples conduction and impurity orbitals. For a single impurity, the Hamiltonian takes the form $H = \sum_{k,\sigma} \epsilon_k c^{\dagger}_{k\sigma} c_{k\sigma} + \sum_{\sigma} \epsilon_d d^{\dagger}_{\sigma} d_{\sigma} + U d^{\dagger}_{\uparrow} d_{\uparrow} d^{\dagger}_{\downarrow} d_{\downarrow} + \sum_{k,\sigma} V_k \left( d^{\dagger}_{\sigma} c_{k\sigma} + c^{\dagger}_{k\sigma} d_{\sigma} \right)$, where the operator $c_{k\sigma}$ is the annihilation operator of a conduction electron, $d_{\sigma}$ is the annihilation operator for the impurity, $k$ is the conduction electron wavevector, and $\sigma$ labels the spin. The on-site Coulomb repulsion is $U$, and $V_k$ gives the hybridization. Regimes The model yields several regimes that depend on the relationship of the impurity energy level $\epsilon_d$ to the Fermi level $\epsilon_F$: The empty orbital regime for $\epsilon_d \gg \epsilon_F$, which has no local moment. The intermediate (mixed-valence) regime for $\epsilon_d \approx \epsilon_F$ or $\epsilon_d + U \approx \epsilon_F$. The local moment regime for $\epsilon_d \ll \epsilon_F \ll \epsilon_d + U$, which yields a magnetic moment at the impurity. In the local moment regime, the magnetic moment is present at the impurity site. However, for low enough temperature, the moment is Kondo screened to give a non-magnetic many-body singlet state. Heavy-fermion systems For heavy-fermion systems, a lattice of impurities is described by the periodic Anderson model. The one-dimensional model is $H = \sum_{k,\sigma} \epsilon_k c^{\dagger}_{k\sigma} c_{k\sigma} + \sum_{j,\sigma} \epsilon_f f^{\dagger}_{j\sigma} f_{j\sigma} + U \sum_{j} f^{\dagger}_{j\uparrow} f_{j\uparrow} f^{\dagger}_{j\downarrow} f_{j\downarrow} + \sum_{j,k,\sigma} V_k \left( e^{i k x_j} c^{\dagger}_{k\sigma} f_{j\sigma} + e^{-i k x_j} f^{\dagger}_{j\sigma} c_{k\sigma} \right)$, where $x_j$ is the position of impurity site $j$, and $f^{\dagger}_{j\sigma}$ is the impurity creation operator (used instead of $d^{\dagger}$ by convention for heavy-fermion systems). The hybridization term allows f-orbital electrons in heavy fermion systems to interact, although they are separated by a distance greater than the Hill limit. Other variants There are other variants of the Anderson model, such as the SU(4) Anderson model, which is used to describe impurities which have an orbital, as well as a spin, degree of freedom. This is relevant in carbon nanotube quantum dot systems. The SU(4) Anderson model Hamiltonian generalizes the single-impurity form by giving both the impurity and conduction electrons an orbital index in addition to spin, with the on-site repulsion acting between all pairs of electrons on the impurity, and can be written $H = \sum_{k,i,\sigma} \epsilon_k c^{\dagger}_{k i \sigma} c_{k i \sigma} + \sum_{i,\sigma} \epsilon_d d^{\dagger}_{i\sigma} d_{i\sigma} + \frac{U}{2} \hat{n}(\hat{n}-1) + \sum_{k,i,\sigma} V_k \left( d^{\dagger}_{i\sigma} c_{k i \sigma} + c^{\dagger}_{k i \sigma} d_{i\sigma} \right)$, where $i$ labels the orbital degree of freedom (which can take one of two values), and $\hat{n} = \sum_{i,\sigma} d^{\dagger}_{i\sigma} d_{i\sigma}$ represents the number operator for the impurity. See also Hubbard model Kondo effect Kondo model Anderson localization References Quantum lattice models Condensed matter physics
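As a hedged, self-contained illustration (not part of the article), the sketch below builds the single-impurity Hamiltonian with just one bath level, using a Jordan–Wigner representation of the fermionic operators, and diagonalizes it with NumPy. The parameters are illustrative and place the impurity in the local moment regime, so the ground-state impurity occupancy comes out close to one.

```python
import numpy as np

def jw_annihilation_ops(n_modes):
    """Jordan-Wigner annihilation operators for n_modes spin-orbitals (basis |0>, |1>)."""
    I = np.eye(2)
    Z = np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowers |1> -> |0>
    ops = []
    for j in range(n_modes):
        factors = [Z] * j + [sm] + [I] * (n_modes - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

# Spin-orbitals: 0 = d_up, 1 = d_down, 2 = c_up, 3 = c_down (single bath level at E_F = 0).
eps_d, U, eps_c, V = -0.5, 1.0, 0.0, 0.2      # illustrative parameters, local moment regime
c = jw_annihilation_ops(4)
cdag = [op.conj().T for op in c]
n = [cdag[i] @ c[i] for i in range(4)]

H = (eps_d * (n[0] + n[1])
     + U * n[0] @ n[1]
     + eps_c * (n[2] + n[3])
     + V * (cdag[2] @ c[0] + cdag[0] @ c[2]
            + cdag[3] @ c[1] + cdag[1] @ c[3]))

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]
d_occupation = ground @ (n[0] + n[1]) @ ground
print(f"ground-state energy = {evals[0]:.4f}, impurity occupancy = {d_occupation:.3f}")
```

A realistic treatment would use many bath levels (or a continuum), but the two-site toy model already shows the singlet formation between the impurity and the conduction electrons that underlies Kondo screening.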
Anderson impurity model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
498
[ "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Matter", "Quantum lattice models" ]
9,896,434
https://en.wikipedia.org/wiki/Prokaryotic%20DNA%20replication
Prokaryotic DNA replication is the process by which a prokaryote duplicates its DNA into another copy that is passed on to daughter cells. Although it is often studied in the model organism E. coli, other bacteria show many similarities. Replication is bi-directional and originates at a single origin of replication (OriC). It consists of three steps: initiation, elongation, and termination. Initiation All cells must finish DNA replication before they can proceed to cell division. Media conditions that support fast growth in bacteria are also coupled with shorter inter-initiation times, i.e. the doubling time of fast-growing cells is shorter than that of slow-growing cells. In other words, under fast-growth conditions the grandmother cell may already begin replicating the DNA destined for its granddaughter cells. For the same reason, the initiation of DNA replication is highly regulated. Bacterial origins regulate orisome assembly, a nucleoprotein complex assembled on the origin that is responsible for unwinding the origin and loading all the replication machinery. In E. coli, the directions for orisome assembly are built into a short stretch of nucleotide sequence called the origin of replication (oriC), which contains multiple binding sites for the initiator protein DnaA (a protein that is highly homologous across the bacterial kingdom). DnaA has four domains, with each domain responsible for a specific task. There are 11 DnaA binding sites/boxes on the E. coli origin of replication, out of which three boxes, R1, R2 and R4 (which have a highly conserved 9 bp consensus sequence 5' - TTATC/ACACA ), are high affinity DnaA boxes. They bind to DnaA-ADP and DnaA-ATP with equal affinities, are bound by DnaA throughout most of the cell cycle, and form a scaffold on which the rest of the orisome assembles. The remaining eight DnaA boxes are low affinity sites that preferentially bind to DnaA-ATP. During initiation, DnaA bound to the high affinity DnaA box R4 donates additional DnaA to the adjacent low affinity site and progressively fills all the low affinity DnaA boxes. Filling of these sites changes the conformation of the origin from its native state. It is hypothesized that DNA stretching by DnaA bound to the origin promotes strand separation, which allows more DnaA to bind to the unwound region. The DnaC helicase loader then interacts with the DnaA bound to the single-stranded DNA to recruit the DnaB helicase, which will continue to unwind the DNA as the DnaG primase lays down an RNA primer and DNA polymerase III holoenzyme begins elongation. Regulation Chromosome replication in bacteria is regulated at the initiation stage. DnaA-ATP is hydrolyzed into the inactive DnaA-ADP by RIDA (Regulatory Inactivation of DnaA), and converted back to the active DnaA-ATP form by DARS (DnaA Reactivating Sequence, which is itself regulated by Fis and IHF). However, the main source of DnaA-ATP is synthesis of new molecules. Meanwhile, several other proteins interact directly with the oriC sequence to regulate initiation, usually by inhibition. In E. coli these proteins include DiaA, SeqA, IciA, HU, and ArcA-P, but they vary across other bacterial species. A few other mechanisms in E. coli that variously regulate initiation are DDAH (datA-Dependent DnaA Hydrolysis, which is also regulated by IHF), inhibition of the dnaA gene (by the SeqA protein), and reactivation of DnaA by the lipid membrane. Elongation Once priming is complete, DNA polymerase III holoenzyme is loaded onto the DNA and replication begins.
The catalytic mechanism of DNA polymerase III involves the use of two metal ions in the active site, and a region in the active site that can discriminate between deoxyribonucleotides and ribonucleotides. The metal ions are generally divalent cations that help the 3' OH initiate a nucleophilic attack onto the alpha phosphate of the deoxyribonucleotide and orient and stabilize the negatively charged triphosphate on the deoxyribonucleotide. Nucleophilic attack by the 3' OH on the alpha phosphate releases pyrophosphate, which is subsequently hydrolyzed (by inorganic phosphatase) into two phosphates. This hydrolysis drives DNA synthesis to completion. Furthermore, DNA polymerase III must be able to distinguish between correctly paired bases and incorrectly paired bases. This is accomplished by distinguishing Watson-Crick base pairs through the use of an active site pocket that is complementary in shape to the structure of correctly paired nucleotides. This pocket has a tyrosine residue that is able to form van der Waals interactions with the correctly paired nucleotide. In addition, dsDNA (double stranded DNA) in the active site has a wider major groove and shallower minor groove that permits the formation of hydrogen bonds with the third nitrogen of purine bases and the second oxygen of pyrimidine bases. Finally, the active site makes extensive hydrogen bonds with the DNA backbone. These interactions result in the DNA polymerase III closing around a correctly paired base. If a base is inserted and incorrectly paired, these interactions could not occur due to disruptions in hydrogen bonding and van der Waals interactions. DNA is read in the 3' → 5' direction; therefore, nucleotides are synthesized (or attached to the template strand) in the 5' → 3' direction. However, one of the parent strands of DNA is 3' → 5' while the other is 5' → 3'. To solve this, replication occurs in opposite directions. Heading towards the replication fork, the leading strand is synthesized in a continuous fashion, only requiring one primer. On the other hand, the lagging strand, heading away from the replication fork, is synthesized in a series of short fragments known as Okazaki fragments, consequently requiring many primers. The RNA primers of Okazaki fragments are subsequently degraded by RNase H and DNA polymerase I (exonuclease), and the gaps (or nicks) are filled with deoxyribonucleotides and sealed by the enzyme ligase. Rate of replication The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 per 10^8. Termination Termination of DNA replication in E. coli is completed through the use of termination sequences and the Tus protein. These sequences allow the two replication forks to pass through in only one direction, but not the other. DNA replication initially produces two catenated or linked circular DNA duplexes, each comprising one parental strand and one newly synthesised strand (by nature of semiconservative replication). This catenation can be visualised as two interlinked rings which cannot be separated. Topoisomerase 2 in E.
coli unlinks or decatenates the two circular DNA duplexes by breaking the phosphodiester bonds between two successive nucleotides of either the parental DNA or the newly formed DNA; its ligating activity then reseals the broken strand, and the two separate DNA duplexes are formed. Other prokaryotic replication models The theta-type replication described above is the best known, but there are other types of prokaryotic replication, such as rolling circle replication and D-loop replication. Rolling circle replication This is seen in bacterial conjugation, where the circular template DNA remains intact and the new strand is synthesized around it. When conjugation is initiated by a signal, the relaxase enzyme creates a nick in one of the strands of the conjugative plasmid at the oriT. Relaxase may work alone or in a complex of over a dozen proteins known collectively as a relaxosome. In the F-plasmid system the relaxase enzyme is called TraI and the relaxosome consists of TraI, TraY, TraM and the integrated host factor IHF. The nicked strand, or T-strand, is then unwound from the unbroken strand and transferred to the recipient cell in a 5'-terminus to 3'-terminus direction. The remaining strand is replicated either independent of conjugative action (vegetative replication beginning at the oriV) or in concert with conjugation (conjugative replication similar to the rolling circle replication of lambda phage). Conjugative replication may require a second nick before successful transfer can occur. A recent report claims to have inhibited conjugation with chemicals that mimic an intermediate step of this second nicking event. D-loop replication D-loop replication is mostly seen in organellar DNA, where a triple-stranded structure called the displacement loop is formed. References DNA replication
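A rough back-of-the-envelope sketch (not from the article): using the phage T4 elongation rate quoted above as a stand-in for fork speed, estimate how long bidirectional replication of a bacterial-sized genome would take and how many errors the quoted mutation rate implies per round. The genome size is an assumed E. coli-like figure, not a value given in the article.

```python
GENOME_BP = 4.6e6            # assumed E. coli-sized genome, ~4.6 Mbp (illustrative)
FORK_RATE_NT_PER_S = 749     # elongation rate quoted above for phage T4 at 37 C
MUTATION_RATE = 1.7e-8       # per base pair per replication, as quoted for phage T4

# Two forks move in opposite directions from the single origin (OriC),
# so each fork copies roughly half of the chromosome.
replication_time_s = (GENOME_BP / 2) / FORK_RATE_NT_PER_S
expected_mutations = GENOME_BP * MUTATION_RATE

print(f"replication time ~ {replication_time_s / 60:.0f} minutes")
print(f"expected mutations per round ~ {expected_mutations:.2f}")
```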
Prokaryotic DNA replication
[ "Biology" ]
1,887
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
9,896,453
https://en.wikipedia.org/wiki/Eukaryotic%20DNA%20replication
Eukaryotic DNA replication is a conserved mechanism that restricts DNA replication to once per cell cycle. Eukaryotic DNA replication of chromosomal DNA is central for the duplication of a cell and is necessary for the maintenance of the eukaryotic genome. DNA replication is the action of DNA polymerases synthesizing a DNA strand complementary to the original template strand. To synthesize DNA, the double-stranded DNA is unwound by DNA helicases ahead of polymerases, forming a replication fork containing two single-stranded templates. Replication processes permit copying a single DNA double helix into two DNA helices, which are divided into the daughter cells at mitosis. The major enzymatic functions carried out at the replication fork are well conserved from prokaryotes to eukaryotes, but the replication machinery in eukaryotic DNA replication is a much larger complex, coordinating many proteins at the site of replication, forming the replisome. The replisome is responsible for copying the entirety of genomic DNA in each proliferative cell. This process allows for the high-fidelity passage of hereditary/genetic information from parental cell to daughter cell and is thus essential to all organisms. Much of the cell cycle is built around ensuring that DNA replication occurs without errors. In G1 phase of the cell cycle, many of the DNA replication regulatory processes are initiated. In eukaryotes, the vast majority of DNA synthesis occurs during S phase of the cell cycle, and the entire genome must be unwound and duplicated to form two daughter copies. During G2, any damaged DNA or replication errors are corrected. Finally, one copy of the genome is segregated into each daughter cell at mitosis or M phase. These daughter copies each contain one strand from the parental duplex DNA and one nascent antiparallel strand. This mechanism is conserved from prokaryotes to eukaryotes and is known as semiconservative DNA replication. In semiconservative replication, the site of DNA replication is a fork-like DNA structure, the replication fork, where the DNA helix is open, or unwound, exposing unpaired DNA nucleotides for recognition and base pairing for the incorporation of free nucleotides into double-stranded DNA. Initiation Initiation of eukaryotic DNA replication is the first stage of DNA synthesis where the DNA double helix is unwound and an initial priming event by DNA polymerase α occurs on the leading strand. The priming event on the lagging strand establishes a replication fork. Priming of the DNA helix consists of the synthesis of an RNA primer to allow DNA synthesis by DNA polymerase α. Priming occurs once at the origin on the leading strand and at the start of each Okazaki fragment on the lagging strand. Origin of replication Replication starts at origins of replication. DNA sequences containing these sites were initially isolated in the late 1970s on the basis of their ability to support replication of plasmids, hence the designation of autonomously replicating sequences (ARS). Origins vary widely in their efficiency, with some being used in almost every cell cycle while others may be used in only one in one thousand S phases. The total number of yeast ARSs is at least 1600, but may be more than 5000 if less active sites are counted, that is, there may be an ARS every 2000 to 8000 base pairs. Pre-replicative complex Multiple replicative proteins assemble on and dissociate from these replicative origins to initiate DNA replication,
with the formation of the pre-replication complex (pre-RC) being a key intermediate in the replication initiation process. Association of the origin recognition complex (ORC) with a replication origin recruits the cell division cycle 6 protein (Cdc6) to form a platform for the loading of the minichromosome maintenance (Mcm 2–7) complex proteins, facilitated by the chromatin licensing and DNA replication factor 1 protein (Cdt1). The ORC, Cdc6, and Cdt1 together are required for the stable association of the Mcm2-7 complex with replicative origins during the G1 phase of the cell cycle. Eukaryotic origins of replication control the formation of several protein complexes that lead to the assembly of two bidirectional DNA replication forks. These events are initiated by the formation of the pre-replication complex (pre-RC) at the origins of replication. This process takes place in the G1 stage of the cell cycle. The pre-RC formation involves the ordered assembly of many replication factors including the origin recognition complex (ORC), Cdc6 protein, Cdt1 protein, and minichromosome maintenance proteins (Mcm2-7). Once the pre-RC is formed, activation of the complex is triggered by two kinases, cyclin-dependent kinase 2 (CDK) and Dbf4-dependent kinase (DDK) that help transition the pre-RC to the initiation complex before the initiation of DNA replication. This transition involves the ordered assembly of additional replication factors to unwind the DNA and accumulate the multiple eukaryotic DNA polymerases around the unwound DNA. Central to the question of how bidirectional replication forks are established at replication origins is the mechanism by which ORC recruits two head-to-head Mcm2-7 complexes to every replication origin to form the pre-replication complex. Origin recognition complex The first step in the assembly of the pre-replication complex (pre-RC) is the binding of the origin recognition complex (ORC) to the replication origin. In late mitosis, the Cdc6 protein joins the bound ORC followed by binding the Cdt1-Mcm2-7 complex. ORC, Cdc6, and Cdt1 are all required to load the six protein minichromosome maintenance (Mcm 2–7) complex onto the DNA. The ORC is a six-subunit, Orc1p-6, protein complex that selects the replicative origin sites on DNA for initiation of replication and ORC binding to chromatin is regulated through the cell cycle. Generally, the function and size of the ORC subunits are conserved throughout many eukaryotic genomes with the difference being their diverged DNA binding sites. The most widely studied origin recognition complex is that of Saccharomyces cerevisiae or yeast which is known to bind to the autonomously replicating sequence (ARS). The S. cerevisiae ORC interacts specifically with both the A and B1 elements of yeast origins of replication, spanning a region of 30 base pairs. The binding to these sequences requires ATP. The atomic structure of the S. cerevisiae ORC bound to ARS DNA has been determined. Orc1, Orc2, Orc3, Orc4, and Orc5 encircle the A element by means of two types of interactions, base non-specific and base-specific, that bend the DNA at the A element. All five subunits contact the sugar phosphate backbone at multiple points of the A element to form a tight grip without base specificity. Orc1 and Orc2 contact the minor groove of the A element while a winged helix domain of Orc4 contacts the methyl groups of the invariant Ts in the major groove of the A element via an insertion helix (IH). 
The absence of this IH in metazoans explains the lack of sequence specificity in human ORC. Removing the IH from the ScORC causes it to lose its specificity for the A element, and to bind promiscuously and preferentially (83%) to promoter regions. The ARS DNA is also bent at the B1 element through interactions with Orc2, Orc5 and Orc6. The bending of origin DNA by ORC appears to be evolutionarily conserved suggesting that it may be required for the Mcm2-7 complex loading mechanism. When the ORC binds to DNA at replication origins, it serves as a scaffold for the assembly of other key initiation factors of the pre-replicative complex. This pre-replicative complex assembly during the G1 stage of the cell cycle is required prior to the activation of DNA replication during the S phase. The removal of at least part of the complex (Orc1) from the chromosome at metaphase is part of the regulation of mammalian ORC to ensure that the pre-replicative complex formation prior to the completion of metaphase is eliminated. Cdc6 protein Binding of the cell division cycle 6 (Cdc6) protein to the origin recognition complex (ORC) is an essential step in the assembly of the pre-replication complex (pre-RC) at the origins of replication. Cdc6 binds to the ORC on DNA in an ATP-dependent manner, which induces a change in the pattern of origin binding that requires Orc1 ATPase. Cdc6 requires ORC in order to associate with chromatin and is in turn required for the Cdt1-Mcm2-7 heptamer to bind to the chromatin. The ORC-Cdc6 complex forms a ring-shaped structure and is analogous to other ATP-dependent protein machines. The levels and activity of Cdc6 regulate the frequency with which the origins of replication are utilized during the cell cycle. Cdt1 protein The chromatin licensing and DNA replication factor 1 (Cdt1) protein is required for the licensing of chromatin for DNA replication. In S. cerevisiae, Cdt1 facilitates the loading of the Mcm2-7 complex one at a time onto the chromosome by stabilising the left-handed open-ring structure of the Mcm2-7 single hexamer. Cdt1 has been shown to associate with the C terminus of Cdc6 to cooperatively promote the association of Mcm proteins to the chromatin. The cryo-EM structure of the OCCM (ORC-Cdc6-Cdt1-MCM) complex shows that the Cdt1-CTD interacts with the Mcm6-WHD. In metazoans, Cdt1 activity during the cell cycle is tightly regulated by its association with the protein geminin, which both inhibits Cdt1 activity during S phase in order to prevent re-replication of DNA and prevents it from ubiquitination and subsequent proteolysis. Minichromosome maintenance protein complex The minichromosome maintenance (Mcm) proteins were named after a genetic screen for DNA replication initiation mutants in S. cerevisiae that affect plasmid stability in an ARS-specific manner. Mcm2, Mcm3, Mcm4, Mcm5, Mcm6 and Mcm7 form a hexameric complex that has an open-ring structure with a gap between Mcm2 and Mcm5. The assembly of the Mcm proteins onto chromatin requires the coordinated function of the origin recognition complex (ORC), Cdc6, and Cdt1. Once the Mcm proteins have been loaded onto the chromatin, ORC and Cdc6 can be removed from the chromatin without preventing subsequent DNA replication. This observation suggests that the primary role of the pre-replication complex is to correctly load the Mcm proteins. 
The Mcm proteins on chromatin form a head-to-head double hexamer with the two rings slightly tilted, twisted and off-centred to create a kink in the central channel where the bound DNA is captured at the interface of the two rings. Each hexameric Mcm2-7 ring first serves as the scaffold for the assembly of the replisome and then as the core of the catalytic CMG (Cdc45-MCM-GINS) helicase, which is a main component of the replisome. Each Mcm protein is highly related to all others, but unique sequences distinguishing each of the subunit types are conserved across eukaryotes. All eukaryotes have exactly six Mcm protein analogs that each fall into one of the existing classes (Mcm2-7), indicating that each Mcm protein has a unique and important function. Minichromosome maintenance proteins are required for DNA helicase activity. Inactivation of any of the six Mcm proteins during S phase irreversibly prevents further progression of the replication fork suggesting that the helicase cannot be recycled and must be assembled at replication origins. Along with the minichromosome maintenance protein complex helicase activity, the complex also has associated ATPase activity. Studies have shown that within the Mcm protein complex are specific catalytic pairs of Mcm proteins that function together to coordinate ATP hydrolysis. These studies, confirmed by cryo-EM structures of the Mcm2-7 complexes, showed that the Mcm complex is a hexamer with subunits arranged in a ring in the order of Mcm2-Mcm6-Mcm4-Mcm7-Mcm3-Mcm5-. Both members of each catalytic pair contribute to the conformation that allows ATP binding and hydrolysis and the mixture of active and inactive subunits presumably allows the Mcm hexameric complex to complete ATP binding and hydrolysis as a whole to create a coordinated ATPase activity. The nuclear localization of the minichromosome maintenance proteins is regulated in budding yeast cells. The Mcm proteins are present in the nucleus in G1 stage and S phase of the cell cycle, but are exported to the cytoplasm during the G2 stage and M phase. A complete and intact six subunit Mcm complex is required to enter into the cell nucleus. In S. cerevisiae, nuclear export is promoted by cyclin-dependent kinase (CDK) activity. Mcm proteins that are associated with chromatin are protected from CDK export machinery due to the lack of accessibility to CDK. Initiation complex During the G1 stage of the cell cycle, the replication initiation factors, origin recognition complex (ORC), Cdc6, Cdt1, and minichromosome maintenance (Mcm) protein complex, bind sequentially to DNA to form a head-to-head dimer of the MCM ring complex, known as the pre-replication complex (pre-RC). While the yeast pre-RC forms a closed DNA complex, the human pre-RC forms an open complex. At the transition of the G1 stage to the S phase of the cell cycle, S phase–specific cyclin-dependent protein kinase (CDK) and Cdc7/Dbf4 kinase (DDK) transform the inert pre-RC into an active complex capable of assembling two bidirectional replisomes. CryoEM structures showed that two DDKs independently dock onto the interface of the MCM double hexamer straddling across the two rings. The sequential phosphorylation of multiple substrates on the NTEs of Mcm4, Mcm2 and Mcm6 is achieved by a wobble mechanism whereby Dbf4 assumes different wobble states to position Cdc7 over its multiple substrates. Phosphorylation of the MCM double hexamer, the Mcm4-NSD in particular, by DDK is essential for viability in yeast. 
The recruitment of Cdc45 and GINS follows after the activation of the MCMs by DDK and CDK. Cdc45 protein Cell division cycle 45 (Cdc45) protein is a critical component for the conversion of the pre-replicative complex to the initiation complex. The Cdc45 protein assembles at replication origins before initiation, is required for replication to begin in Saccharomyces cerevisiae, and has an essential role during elongation. Thus, Cdc45 has central roles in both the initiation and elongation phases of chromosomal DNA replication. Cdc45 associates with chromatin after the beginning of initiation in the late G1 stage and during the S phase of the cell cycle. Cdc45 physically associates with Mcm5 and displays genetic interactions with five of the six members of the Mcm gene family and the ORC2 gene. The loading of Cdc45 onto chromatin is critical for loading various other replication proteins, including DNA polymerase α, DNA polymerase ε, replication protein A (RPA) and proliferating cell nuclear antigen (PCNA), onto chromatin. Within a Xenopus nucleus-free system, it has been demonstrated that Cdc45 is required for the unwinding of plasmid DNA. The Xenopus nucleus-free system also demonstrates that DNA unwinding and tight RPA binding to chromatin occur only in the presence of Cdc45. Binding of Cdc45 to chromatin depends on Clb-Cdc28 kinase activity as well as functional Cdc6 and Mcm2, which suggests that Cdc45 associates with the pre-RC after activation of S-phase cyclin-dependent kinases (CDKs). As indicated by the timing and the CDK dependence, binding of Cdc45 to chromatin is crucial for commitment to initiation of DNA replication. During S phase, Cdc45 physically interacts with Mcm proteins on chromatin; however, dissociation of Cdc45 from chromatin is slower than that of the Mcm proteins, which indicates that the two are released by different mechanisms. GINS The six minichromosome maintenance proteins and Cdc45 are essential during initiation and elongation for the movement of replication forks and for unwinding of the DNA. GINS is essential for the interaction of Mcm and Cdc45 at the origins of replication during initiation and then at DNA replication forks as the replisome progresses. The GINS complex is composed of four small proteins: Sld5 (Cdc105), Psf1 (Cdc101), Psf2 (Cdc102) and Psf3 (Cdc103); the name GINS comes from 'go, ichi, ni, san', which means '5, 1, 2, 3' in Japanese. Cdc45, Mcm2-7 and GINS together form the CMG helicase, the replicative helicase of the replisome. Although the Mcm2-7 complex alone has weak helicase activity, Cdc45 and GINS are required for robust helicase activity. Mcm10 Mcm10 is essential for chromosome replication and interacts with the minichromosome maintenance 2-7 helicase that is loaded in an inactive form at origins of DNA replication. Mcm10 also chaperones the catalytic DNA polymerase α and helps stabilize the polymerase at replication forks. DDK and CDK kinases At the onset of S phase, the pre-replicative complex must be activated by two S phase-specific kinases in order to form an initiation complex at an origin of replication. One kinase is the Cdc7-Dbf4 kinase called Dbf4-dependent kinase (DDK) and the other is cyclin-dependent kinase (CDK). Chromatin-binding assays of Cdc45 in yeast and Xenopus have shown that a downstream event of CDK action is loading of Cdc45 onto chromatin. Cdc6 has been speculated to be a target of CDK action, because of the association between Cdc6 and CDK, and the CDK-dependent phosphorylation of Cdc6. 
The CDK-dependent phosphorylation of Cdc6 has been considered to be required for entry into the S phase. Both the catalytic subunit of DDK, Cdc7, and the activator protein, Dbf4, are conserved in eukaryotes and are required for the onset of the S phase of the cell cycle. Both Dbf4 and Cdc7 are required for the loading of Cdc45 onto chromatin origins of replication. The target for binding of the DDK kinase is the chromatin-bound form of the Mcm complex. High resolution cryo-EM structures showed that the Dbf4 subunit of DDK straddles across the hexamer interface of the DNA-bound MCM-DH, contacting Mcm2 of one hexamer and Mcm4/6 of the opposite hexamer. Mcm2, Mcm4 and Mcm6 are all substrates of phosphorylation by DDK, but only the N-terminal serine/threonine-rich domain (NSD) of Mcm4 is an essential DDK target. Phosphorylation of the NSD leads to the activation of Mcm helicase activity. Dpb11, Sld3, and Sld2 proteins Sld3, Sld2, and Dpb11 interact with many replication proteins. Sld3 and Cdc45 form a complex that associates with the pre-RC at the early origins of replication even in the G1 phase and with the later origins of replication in the S phase, in a mutually Mcm-dependent manner. Dpb11 and Sld2 interact with Polymerase ɛ, and cross-linking experiments have indicated that Dpb11 and Polymerase ɛ coprecipitate in the S phase and associate with replication origins. Sld3 and Sld2 are phosphorylated by CDK, which enables the two replicative proteins to bind to Dpb11. Dpb11 has two pairs of BRCA1 C Terminus (BRCT) domains, which are known as phosphopeptide-binding domains. The N-terminal pair of the BRCT domains binds to phosphorylated Sld3, and the C-terminal pair binds to phosphorylated Sld2. Both of these interactions are essential for CDK-dependent activation of DNA replication in budding yeast. Dpb11 also interacts with GINS and participates in the initiation and elongation steps of chromosomal DNA replication. GINS is one of the replication proteins found at replication forks and forms a complex with Cdc45 and Mcm. These phosphorylation-dependent interactions between Dpb11, Sld2, and Sld3 are essential for CDK-dependent activation of DNA replication, and by using cross-linking reagents within some experiments, a fragile complex was identified called the pre-loading complex (pre-LC). This complex contains Pol ɛ, GINS, Sld2, and Dpb11. The pre-LC is found to form before any association with the origins in a CDK-dependent and DDK-dependent manner, and CDK activity regulates the initiation of DNA replication through the formation of the pre-LC. Elongation The formation of the pre-replicative complex (pre-RC) marks the potential sites for the initiation of DNA replication. Consistent with the minichromosome maintenance complex encircling double stranded DNA, formation of the pre-RC does not lead to the immediate unwinding of origin DNA or the recruitment of DNA polymerases. Instead, the pre-RC that is formed during the G1 phase of the cell cycle is only activated to unwind the DNA and initiate replication after the cells pass from the G1 phase to the S phase of the cell cycle. Once the initiation complex is formed and the cells pass into the S phase, the complex then becomes a replisome. The eukaryotic replisome complex is responsible for coordinating DNA replication. Replication on the leading and lagging strands is performed by DNA polymerase ε and DNA polymerase δ. 
Many replisome factors, including Claspin, And1, the replication factor C clamp loader and the fork protection complex, are responsible for regulating polymerase functions and coordinating DNA synthesis with the unwinding of the template strand by the Cdc45-Mcm-GINS complex. As the DNA is unwound, the twist number decreases. To compensate for this, the writhe number increases, introducing positive supercoils in the DNA. These supercoils would cause DNA replication to halt if they were not removed. Topoisomerases are responsible for removing these supercoils ahead of the replication fork. The replisome is responsible for copying the entire genomic DNA in each proliferative cell. The base pairing and chain formation reactions, which form the daughter helix, are catalyzed by DNA polymerases. These enzymes move along single-stranded DNA and allow for the extension of the nascent DNA strand by "reading" the template strand and allowing for incorporation of the proper purine nucleobases, adenine and guanine, and pyrimidine nucleobases, thymine and cytosine. Activated free deoxyribonucleotides exist in the cell as deoxyribonucleotide triphosphates (dNTPs). These free nucleotides are added to an exposed 3'-hydroxyl group on the last incorporated nucleotide. In this reaction, a pyrophosphate is released from the free dNTP, generating energy for the polymerization reaction and exposing the 5' monophosphate, which is then covalently bonded to the 3' oxygen. Additionally, incorrectly inserted nucleotides can be removed and replaced by the correct nucleotides in an energetically favorable reaction. This property is vital to proper proofreading and repair of errors that occur during DNA replication. Replication fork The replication fork is the junction between the newly separated template strands, known as the leading and lagging strands, and the double stranded DNA. Since duplex DNA is antiparallel, DNA replication occurs in opposite directions between the two new strands at the replication fork, but all DNA polymerases synthesize DNA in the 5' to 3' direction with respect to the newly synthesized strand. Further coordination is required during DNA replication. Two replicative polymerases synthesize DNA in opposite orientations. Polymerase ε synthesizes DNA on the "leading" DNA strand continuously, as it is pointing in the same direction as DNA unwinding by the replisome. In contrast, polymerase δ synthesizes DNA on the "lagging" strand, which is the opposite DNA template strand, in a fragmented or discontinuous manner. The discontinuous stretches of DNA replication products on the lagging strand are known as Okazaki fragments and are about 100 to 200 bases in length at eukaryotic replication forks. The lagging strand usually contains longer stretches of single-stranded DNA that is coated with single-stranded binding proteins, which help stabilize the single-stranded templates by preventing secondary structure formation. In eukaryotes, these single-stranded binding proteins are a heterotrimeric complex known as replication protein A (RPA). Each Okazaki fragment is preceded by an RNA primer, which is displaced by the progression of the next Okazaki fragment during synthesis. RNase H recognizes the DNA:RNA hybrids that are created by the use of RNA primers and is responsible for removing these from the replicated strand, leaving behind a primer:template junction. DNA polymerase α recognizes these sites and elongates the breaks left by primer removal. 
In eukaryotic cells, a small amount of the DNA segment immediately upstream of the RNA primer is also displaced, creating a flap structure. This flap is then cleaved by endonucleases. At the replication fork, the gap in DNA after removal of the flap is sealed by DNA ligase I, which repairs the nicks that are left between the 3'-OH and 5'phosphate of the newly synthesized strand. Owing to the relatively short nature of the eukaryotic Okazaki fragment, DNA replication synthesis occurring discontinuously on the lagging strand is less efficient and more time-consuming than leading-strand synthesis. DNA synthesis is complete once all RNA primers are removed and nicks are repaired. Leading strand During DNA replication, the replisome will unwind the parental duplex DNA into a two single-stranded DNA template replication fork in a 5' to 3' direction. The leading strand is the template strand that is being replicated in the same direction as the movement of the replication fork. This allows the newly synthesized strand complementary to the original strand to be synthesized 5' to 3' in the same direction as the movement of the replication fork. Once an RNA primer has been added by a primase to the 3' end of the leading strand, DNA synthesis will continue in a 3' to 5' direction with respect to the leading strand uninterrupted. DNA Polymerase ε will continuously add nucleotides to the template strand therefore making leading strand synthesis require only one primer and has uninterrupted DNA polymerase activity. Lagging strand DNA replication on the lagging strand is discontinuous. In lagging strand synthesis, the movement of DNA polymerase in the opposite direction of the replication fork requires the use of multiple RNA primers. DNA polymerase will synthesize short fragments of DNA called Okazaki fragments which are added to the 3' end of the primer. These fragments can be anywhere between 100 and 400 nucleotides long in eukaryotes. At the end of Okazaki fragment synthesis, DNA polymerase δ runs into the previous Okazaki fragment and displaces its 5' end containing the RNA primer and a small segment of DNA. This generates an RNA-DNA single strand flap, which must be cleaved, and the nick between the two Okazaki fragments must be sealed by DNA ligase I. This process is known as Okazaki fragment maturation and can be handled in two ways: one mechanism processes short flaps, while the other deals with long flaps. DNA polymerase δ is able to displace up to 2 to 3 nucleotides of DNA or RNA ahead of its polymerization, generating a short "flap" substrate for Fen1, which can remove nucleotides from the flap, one nucleotide at a time. By repeating cycles of this process, DNA polymerase δ and Fen1 can coordinate the removal of RNA primers and leave a DNA nick at the lagging strand. It has been proposed that this iterative process is preferable to the cell because it is tightly regulated and does not generate large flaps that need to be excised. In the event of deregulated Fen1/DNA polymerase δ activity, the cell uses an alternative mechanism to generate and process long flaps by using Dna2, which has both helicase and nuclease activities. The nuclease activity of Dna2 is required for removing these long flaps, leaving a shorter flap to be processed by Fen1. Electron microscopy studies indicate that nucleosome loading on the lagging strand occurs very close to the site of synthesis. Thus, Okazaki fragment maturation is an efficient process that occurs immediately after the nascent DNA is synthesized. 
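Purely as an illustration of continuous versus discontinuous synthesis, the toy Python sketch below builds a "leading strand" in one pass and a "lagging strand" as primer-initiated, Okazaki-like fragments that are then matured into a single strand. Fragment and primer lengths, directionality, and the fact that the two strands copy different parental templates are all simplified away; none of the names refer to a real library.

```python
# Toy model of continuous vs discontinuous synthesis (illustrative only; directionality
# and the use of different parental templates by the two strands are ignored here).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    return "".join(COMPLEMENT[base] for base in seq)

def leading_strand(template):
    """Continuous synthesis: a single primer, then uninterrupted extension."""
    return complement(template)

def lagging_strand(template, fragment_len=8, primer_len=2):
    """Discontinuous synthesis: each Okazaki-like fragment begins with a short RNA primer
    (marked in lowercase)."""
    fragments = []
    for start in range(0, len(template), fragment_len):
        chunk = complement(template[start:start + fragment_len])
        fragments.append(chunk[:primer_len].lower() + chunk[primer_len:])
    return fragments

def mature(fragments):
    """Okazaki fragment maturation: primers are replaced with DNA and fragments are ligated."""
    return "".join(fragment.upper() for fragment in fragments)

template = "ATGCGTACCGGATTACGATCGTAA"
assert mature(lagging_strand(template)) == leading_strand(template)
print(lagging_strand(template))
```

The assertion holds because, in the toy model, primer replacement and ligation of every fragment reconstitute exactly the same complement that continuous synthesis produces in one pass.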
Replicative DNA polymerases After the replicative helicase has unwound the parental DNA duplex, exposing two single-stranded DNA templates, replicative polymerases are needed to generate two copies of the parental genome. DNA polymerase function is highly specialized: each polymerase accomplishes replication on specific templates and in narrowly defined locations. At the eukaryotic replication fork, there are three distinct replicative polymerase complexes that contribute to DNA replication: Polymerase α, Polymerase δ, and Polymerase ε. These three polymerases are essential for viability of the cell. Because DNA polymerases require a primer on which to begin DNA synthesis, polymerase α (Pol α) acts as a replicative primase. Pol α is associated with an RNA primase, and this complex accomplishes the priming task by synthesizing a primer that contains a short 10-nucleotide stretch of RNA followed by 10 to 20 DNA bases. Importantly, this priming action occurs at replication initiation at origins to begin leading-strand synthesis and also at the 5' end of each Okazaki fragment on the lagging strand. However, Pol α is not able to continue DNA replication and must be replaced with another polymerase to continue DNA synthesis. Polymerase switching requires clamp loaders, and it has been proven that normal DNA replication requires the coordinated actions of all three DNA polymerases: Pol α for priming synthesis, Pol ε for leading-strand replication, and Pol δ, which is repeatedly loaded, for generating Okazaki fragments during lagging-strand synthesis. Polymerase α (Pol α): Forms a complex with a small catalytic subunit (PriS) and a large noncatalytic (PriL) subunit. First, synthesis of an RNA primer allows DNA synthesis by DNA polymerase alpha. This occurs once at the origin on the leading strand and at the start of each Okazaki fragment on the lagging strand. The Pri subunits act as a primase, synthesizing an RNA primer. DNA Pol α elongates the newly formed primer with DNA nucleotides. After around 20 nucleotides, elongation is taken over by Pol ε on the leading strand and Pol δ on the lagging strand. Polymerase δ (Pol δ): Highly processive and has proofreading, 3'->5' exonuclease activity. In vivo, it is the main polymerase involved in both lagging strand and leading strand synthesis. Polymerase ε (Pol ε): Highly processive and has proofreading, 3'->5' exonuclease activity. Highly related to Pol δ; in vivo it functions mainly in error checking of Pol δ. Cdc45–Mcm–GINS helicase complex The DNA helicases and polymerases must remain in close contact at the replication fork. If unwinding occurs too far in advance of synthesis, large tracts of single-stranded DNA are exposed. This can activate DNA damage signaling or induce DNA repair processes. To thwart these problems, the eukaryotic replisome contains specialized proteins that are designed to regulate the helicase activity ahead of the replication fork. These proteins also provide docking sites for physical interaction between helicases and polymerases, thereby ensuring that duplex unwinding is coupled with DNA synthesis. For DNA polymerases to function, the double-stranded DNA helix has to be unwound to expose two single-stranded DNA templates for replication. DNA helicases are responsible for unwinding the double-stranded DNA during chromosome replication. Helicases in eukaryotic cells are remarkably complex. The catalytic core of the helicase is composed of six minichromosome maintenance (Mcm2-7) proteins, forming a hexameric ring. 
Away from DNA, the Mcm2-7 proteins form a single heterohexamer and are loaded in an inactive form at origins of DNA replication as head-to-head double hexamers around double-stranded DNA. The Mcm proteins are recruited to replication origins and then redistributed throughout the genomic DNA during S phase, indicative of their localization to the replication fork. Loading of Mcm proteins can only occur during the G1 phase of the cell cycle, and the loaded complex is then activated during S phase by recruitment of the Cdc45 protein and the GINS complex to form the active Cdc45–Mcm–GINS (CMG) helicase at DNA replication forks. Mcm activity is required throughout the S phase for DNA replication. A variety of regulatory factors assemble around the CMG helicase to produce the 'Replisome Progression Complex', which associates with DNA polymerases to form the eukaryotic replisome, the structure of which is still quite poorly defined in comparison with its bacterial counterpart. The isolated CMG helicase and Replisome Progression Complex contain a single Mcm protein ring complex, suggesting that the loaded double hexamer of the Mcm proteins at origins might be broken into two single hexameric rings as part of the initiation process, with each Mcm protein complex ring forming the core of a CMG helicase at the two replication forks established from each origin. The full CMG complex is required for DNA unwinding, and the complex of Cdc45-Mcm-GINS is the functional DNA helicase in eukaryotic cells. Ctf4 and And1 proteins The CMG complex interacts with the rest of the replisome through the Ctf4 and And1 proteins. Ctf4/And1 proteins interact with both the CMG complex and DNA polymerase α. Ctf4 is a polymerase α accessory factor, which is required for the recruitment of polymerase α to replication origins. Mrc1 and Claspin proteins Mrc1/Claspin proteins couple leading-strand synthesis with the CMG complex helicase activity. Mrc1 interacts with polymerase ε as well as Mcm proteins. The importance of this direct link between the helicase and the leading-strand polymerase is underscored by results in cultured human cells, where Mrc1/Claspin is required for efficient replication fork progression. These results suggest that efficient DNA replication also requires the coupling of helicases and leading-strand synthesis. Proliferating cell nuclear antigen DNA polymerases require additional factors to support DNA replication. DNA polymerases have a semiclosed 'hand' structure, which allows the polymerase to load onto the DNA and begin translocating. This structure permits DNA polymerase to hold the single-stranded DNA template, incorporate dNTPs at the active site, and release the newly formed double-stranded DNA. However, the structure of DNA polymerases does not allow a continuous stable interaction with the template DNA. To strengthen the interaction between the polymerase and the template DNA, DNA sliding clamps associate with the polymerase to promote the processivity of the replicative polymerase. In eukaryotes, the sliding clamp is a homotrimeric ring structure known as the proliferating cell nuclear antigen (PCNA). The PCNA ring has polarity, with surfaces that interact with DNA polymerases and tether them securely to the DNA template. PCNA-dependent stabilization of DNA polymerases has a significant effect on DNA replication because PCNA is able to enhance polymerase processivity up to 1,000-fold. 
PCNA is an essential cofactor and has the distinction of being one of the most common interaction platforms in the replisome, accommodating multiple processes at the replication fork, and so PCNA is also viewed as a regulatory cofactor for DNA polymerases. Replication factor C PCNA fully encircles the DNA template strand and must be loaded onto DNA at the replication fork. On the leading strand, loading of PCNA is an infrequent process, because DNA replication on the leading strand is continuous until replication is terminated. However, on the lagging strand, DNA polymerase δ needs to be continually loaded at the start of each Okazaki fragment. This constant initiation of Okazaki fragment synthesis requires repeated PCNA loading for efficient DNA replication. PCNA loading is accomplished by the replication factor C (RFC) complex. The RFC complex is composed of five ATPases: Rfc1, Rfc2, Rfc3, Rfc4 and Rfc5. RFC recognizes primer-template junctions and loads PCNA at these sites. The PCNA homotrimer is opened by RFC through ATP hydrolysis and is then loaded onto DNA in the proper orientation to facilitate its association with the polymerase. Clamp loaders can also unload PCNA from DNA, a mechanism needed when replication must be terminated. Stalled replication fork DNA replication at the replication fork can be halted by a shortage of deoxynucleotide triphosphates (dNTPs) or by DNA damage, resulting in replication stress. This halting of replication is described as a stalled replication fork. A fork protection complex of proteins stabilizes the replication fork until DNA damage or other replication problems can be fixed. Prolonged replication fork stalling can lead to further DNA damage. Stalling signals are deactivated if the problems causing the replication fork to stall are resolved. Termination Termination of eukaryotic DNA replication requires different processes depending on whether the chromosomes are circular or linear. Unlike linear molecules, circular chromosomes are able to replicate the entire molecule. However, the two DNA molecules will remain linked together. This issue is handled by decatenation of the two DNA molecules by a type II topoisomerase. Type II topoisomerases are also used to separate linear strands, as they are intricately folded into nucleosomes within the cell. As previously mentioned, linear chromosomes face another issue that is not seen in circular DNA replication. Because an RNA primer is required for initiation of DNA synthesis, the lagging strand is at a disadvantage in replicating the entire chromosome. While the leading strand can use a single RNA primer to extend the 5' terminus of the replicating DNA strand, multiple RNA primers are responsible for lagging strand synthesis, creating Okazaki fragments. This leads to a problem because DNA polymerase is able to add nucleotides only to the 3' end of the DNA strand. The 3'-5' action of DNA polymerase along the parent strand leaves a short single-stranded DNA (ssDNA) region at the 3' end of the parent strand when the Okazaki fragments have been repaired. Since replication occurs in opposite directions at opposite ends of parent chromosomes, each strand is a lagging strand at one end. Over time this would result in progressive shortening of both daughter chromosomes. This is known as the end replication problem. The end replication problem is handled in eukaryotic cells by telomere regions and telomerase. Telomeres extend the 3' end of the parental chromosome beyond the 5' end of the daughter strand. 
This single-stranded DNA structure can act as an origin of replication that recruits telomerase. Telomerase is a specialized DNA polymerase that consists of multiple protein subunits and an RNA component. The RNA component of telomerase anneals to the single-stranded 3' end of the template DNA and contains 1.5 copies of the telomeric sequence. Telomerase contains a protein subunit that is a reverse transcriptase, called telomerase reverse transcriptase or TERT. TERT synthesizes DNA until the end of the template telomerase RNA and then disengages. This process can be repeated as many times as needed to extend the 3' end of the parental DNA molecule. This 3' addition provides a template for extension of the 5' end of the daughter strand by lagging strand DNA synthesis. Regulation of telomerase activity is handled by telomere-binding proteins. Replication fork barriers Prokaryotic DNA replication is bidirectional; within a replicative origin, replisome complexes are created at each end of the replication origin and replisomes move away from each other from the initial starting point. In prokaryotes, bidirectional replication initiates at one replicative origin on the circular chromosome and terminates at a site opposite the initial start of the origin. These termination regions have DNA sequences known as Ter sites. These Ter sites are bound by the Tus protein. The Ter-Tus complex is able to stop helicase activity, terminating replication. In eukaryotic cells, termination of replication usually occurs through the collision of the two replicative forks between two active replication origins. The location of the collision varies with the timing of origin firing. In this way, if a replication fork becomes stalled or collapses at a certain site, replication of the site can be rescued when a replisome traveling in the opposite direction completes copying the region. There are programmed replication fork barriers (RFBs), bound by RFB proteins at various locations throughout the genome, which are able to terminate or pause replication forks, stopping progression of the replisome. Replication factories It has been found that replication happens in a localised way in the cell nucleus. Contrary to the traditional view of replication forks moving along stagnant DNA, a concept of replication factories emerged, which means replication forks are concentrated towards some immobilised 'factory' regions through which the template DNA strands pass like conveyor belts. Cell cycle regulation DNA replication is a tightly orchestrated process that is controlled within the context of the cell cycle. Progress through the cell cycle, and in turn DNA replication, is tightly regulated by the formation and activation of pre-replicative complexes (pre-RCs), which is achieved through the activation and inactivation of cyclin-dependent kinases (Cdks, CDKs). Specifically, it is the interactions of cyclins and cyclin-dependent kinases that are responsible for the transition from G1 into S-phase. During the G1 phase of the cell cycle there are low levels of CDK activity. This low level of CDK activity allows for the formation of new pre-RC complexes but is not sufficient for DNA replication to be initiated by the newly formed pre-RCs. During the remaining phases of the cell cycle there are elevated levels of CDK activity. This high level of CDK activity is responsible for initiating DNA replication as well as inhibiting new pre-RC complex formation. 
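The licensing logic described in this section, where low CDK activity in G1 permits pre-RC formation but not firing and high CDK activity later permits firing while blocking new licensing, can be condensed into a short Python sketch. The phase labels follow the text; the boolean rules and the function name are simplifications invented for illustration.

```python
# Simplified once-per-cell-cycle licensing logic (illustrative only).
def origin_state(phase, origin_licensed):
    """Return (can_license_new_origins, can_fire_licensed_origins) for a cell-cycle phase."""
    low_cdk = phase == "G1"                      # CDK activity is low only in G1
    can_license = low_cdk                        # new pre-RCs can form only under low CDK activity
    can_fire = origin_licensed and phase == "S"  # firing needs a licensed origin plus S-phase kinase activity
    return can_license, can_fire

for phase in ["G1", "S", "G2", "M"]:
    print(phase, origin_state(phase, origin_licensed=True))
```

Printed for the four phases, the sketch shows licensing allowed only in G1 and firing only in S, which is the once-per-cell-cycle behaviour the surrounding text describes.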
Once DNA replication has been initiated, the pre-RC complex is broken down. Because CDK levels remain high during the S phase, G2, and M phases of the cell cycle, no new pre-RC complexes can be formed. This helps to ensure that no new initiation can occur until cell division is complete. In addition to regulation by cyclin-dependent kinases, a new round of replication is thought to be prevented through the downregulation of Cdt1. This is achieved via degradation of Cdt1 as well as through the inhibitory actions of a protein known as geminin. Geminin binds tightly to Cdt1 and is thought to be the major inhibitor of re-replication. Geminin first appears in S-phase and is degraded at the metaphase-anaphase transition, possibly through ubiquitination by the anaphase-promoting complex (APC). Various cell cycle checkpoints are present throughout the course of the cell cycle that determine whether a cell will progress through division in its entirety. Importantly for replication, the G1, or restriction, checkpoint determines whether initiation of replication will begin or whether the cell will instead be placed in a resting stage known as G0. Cells in the G0 stage of the cell cycle are prevented from initiating a round of replication because the minichromosome maintenance proteins are not expressed. Transition into the S-phase indicates replication has begun. Replication checkpoint proteins In order to preserve genetic information during cell division, DNA replication must be completed with high fidelity. In order to achieve this task, eukaryotic cells have proteins in place during certain points in the replication process that are able to detect any errors during DNA replication and are able to preserve genomic integrity. These checkpoint proteins are able to stop the cell cycle from entering mitosis in order to allow time for DNA repair. Checkpoint proteins are also involved in some DNA repair pathways, while they stabilize the structure of the replication fork to prevent further damage. These checkpoint proteins are essential to avoid passing down mutations or other chromosomal aberrations to offspring. Eukaryotic checkpoint proteins are well conserved and involve two phosphatidylinositol 3-kinase-related kinases (PIKKs), ATR and ATM. Both ATR and ATM share a target phosphorylation sequence, the SQ/TQ motif, but their individual roles in cells differ. ATM is primarily involved in arresting the cell cycle in response to DNA double-strand breaks, whereas ATR responds chiefly to replication stress. ATR has an obligate checkpoint partner, ATR-interacting-protein (ATRIP), and together these two proteins are responsive to stretches of single-stranded DNA that are coated by replication protein A (RPA). The formation of single-stranded DNA occurs frequently, particularly during replication stress. ATR-ATRIP is able to arrest the cell cycle to preserve genome integrity. ATR is found on chromatin during S phase, similar to RPA and claspin. The generation of single-stranded DNA tracts is important in initiating the checkpoint pathways downstream of replication damage. Once single-stranded DNA becomes sufficiently long, the single-stranded DNA coated with RPA is able to recruit ATR-ATRIP. In order to become fully active, the ATR kinase relies on sensor proteins that sense whether the checkpoint proteins are localized to a valid site of DNA replication stress. The Rad9-Hus1-Rad1 (9-1-1) heterotrimeric clamp and its clamp loader RFC-Rad17 are able to recognize gapped or nicked DNA. The RFC-Rad17 clamp loader loads 9-1-1 onto the damaged DNA. 
The presence of 9-1-1 on DNA is enough to facilitate the interaction between ATR-ATRIP and a group of proteins termed checkpoint mediators, such as TOPBP1 and Mrc1/Claspin. TOPBP1 interacts with and recruits the phosphorylated Rad9 component of 9-1-1 and binds ATR-ATRIP, which phosphorylates Chk1. Mrc1/Claspin is also required for the complete activation of ATR-ATRIP that phosphorylates Chk1, the major downstream checkpoint effector kinase. Claspin is a component of the replisome and contains a domain for docking with Chk1, revealing a specific function of Claspin during DNA replication: the promotion of checkpoint signaling at the replisome. Chk1 signaling is vital for arresting the cell cycle and preventing cells from entering mitosis with incomplete DNA replication or DNA damage. Chk1-dependent Cdk inhibition is important for the function of the ATR-Chk1 checkpoint, arresting the cell cycle and allowing sufficient time for completion of DNA repair mechanisms, which in turn prevents the inheritance of damaged DNA. In addition, Chk1-dependent Cdk inhibition plays a critical role in inhibiting origin firing during S phase. This mechanism prevents continued DNA synthesis and is required for the protection of the genome in the presence of replication stress and potential genotoxic conditions. Thus, ATR-Chk1 activity further prevents potential replication problems at the level of single replication origins by inhibiting initiation of replication throughout the genome, until the signaling cascade maintaining cell-cycle arrest is turned off. Replication through nucleosomes Eukaryotic DNA must be tightly compacted in order to fit within the confined space of the nucleus. Chromosomes are packaged by wrapping 147 base pairs of DNA around an octamer of histone proteins, forming a nucleosome. The nucleosome octamer includes two copies of each histone H2A, H2B, H3, and H4. Due to the tight association of histone proteins with DNA, eukaryotic cells have proteins that are designed to remodel histones ahead of the replication fork, in order to allow smooth progression of the replisome. There are also proteins involved in reassembling histones behind the replication fork to reestablish the nucleosome conformation. There are several histone chaperones that are known to be involved in nucleosome assembly after replication. The FACT complex has been found to interact with the DNA polymerase α-primase complex, and the subunits of the FACT complex interact genetically with replication factors. The FACT complex is a heterodimer that does not hydrolyze ATP but is able to facilitate the "loosening" of histones in nucleosomes; how the FACT complex relieves the tight association between histones and DNA remains unanswered. Another histone chaperone that associates with the replisome is Asf1, which interacts with the Mcm complex in a manner dependent on histone H3-H4 dimers. Asf1 is able to pass newly synthesized H3-H4 dimers to deposition factors behind the replication fork, and this activity makes the H3-H4 histone dimers available at the site of histone deposition just after replication. Asf1 (and its partner Rtt109) has also been implicated in inhibiting gene expression from replicated genes during S-phase. The heterotrimeric chaperone chromatin assembly factor 1 (CAF-1) is a chromatin formation protein that is involved in depositing histones onto both newly replicated DNA strands to form chromatin. 
CAF-1 contains a PCNA-binding motif, called a PIP-box, that allows CAF-1 to associate with the replisome through PCNA and to deposit histone H3-H4 dimers onto newly synthesized DNA. The Rtt106 chaperone is also involved in this process and associates with CAF-1 and H3-H4 dimers during chromatin formation. These processes load newly synthesized histones onto DNA. After the deposition of histones H3-H4, nucleosomes form by the association of histone H2A-H2B. This process is thought to occur through the FACT complex, since it is already associated with the replisome and is able to bind free H2A-H2B, or there is the possibility of another H2A-H2B chaperone, Nap1. Electron microscopy studies show that this occurs very quickly, as nucleosomes can be observed forming just a few hundred base pairs behind the replication fork. Therefore, the entire process of forming new nucleosomes takes place just after replication, owing to the coupling of histone chaperones to the replisome. Mitotic DNA Synthesis Mitotic DNA synthesis (MiDAS) is a form of irregular DNA replication in which DNA synthesis, which normally occurs in the S phase, takes place in the M phase of the cell cycle. Mitotic DNA synthesis is known to occur when cells are experiencing stress related to DNA replication. Certain loci in the genome, such as common fragile sites (CFS) or sites of ALT-associated replication defects, can induce replication stress that may lead to MiDAS. Mitotic DNA synthesis is enabled by a protein known as RAD52, which then recruits enzymes including MUS81 and POLD3. These enzymes work to promote MiDAS, operating independently of ATR, BRCA2, and RAD51, which are necessary to prevent replication stress at CFS loci throughout S phase. MiDAS has been recorded in mammals and yeast; whether it occurs in other eukaryotic organisms has yet to be determined. Comparisons between prokaryotic and eukaryotic DNA replication Compared to prokaryotic DNA replication, namely that of bacteria, the completion of eukaryotic DNA replication is more complex, involving multiple origins of replication and a larger set of replicative proteins. Prokaryotic DNA is circular and has only one origin of replication at which replication starts. By contrast, eukaryotic DNA is linear and, when replicated, contains as many as one thousand origins of replication. Eukaryotic DNA replication is also bidirectional, but here the word bidirectional carries a slightly different meaning. Eukaryotic linear DNA has many origins (called O) and termini (called T), with a "T" lying to the right of each "O"; one "O" and one "T" together form one replicon. After the formation of the pre-initiation complex, when one replicon starts elongation, initiation starts in the second replicon. If the fork of the first replicon moves in one direction, the fork of the second replicon moves in the opposite direction, until the "T" of the first replicon is reached. At "T", both replicons merge to complete that stretch of replication. Meanwhile, the second replicon also continues moving forward, to meet the third replicon. This movement of adjacent replicons toward one another is termed bidirectional replication. Eukaryotic DNA replication requires precise coordination of all DNA polymerases and associated proteins to replicate the entire genome each time a cell divides. This process is achieved through a series of steps of protein assembly at origins of replication, mainly focusing the regulation of DNA replication on the association of the MCM helicase with the DNA. 
These origins of replication direct the number of protein complexes that will form to initiate replication. In bacterial DNA replication, regulation focuses on the binding of the DnaA initiator protein to the DNA, with initiation of replication occurring multiple times during one cell cycle. Both prokaryotic and eukaryotic replication systems use ATP binding and hydrolysis to direct helicase loading, and in both cases the helicase is loaded in the inactive form. However, eukaryotic helicases are double hexamers that are loaded onto double stranded DNA whereas bacterial helicases are single hexamers loaded onto single stranded DNA. Segregation of chromosomes is another difference between prokaryotic and eukaryotic cells. Rapidly dividing cells, such as bacteria, will often begin to segregate chromosomes that are still in the process of replication. In eukaryotic cells chromosome segregation into the daughter cells is not initiated until replication is complete in all chromosomes. Despite these differences, however, the underlying process of replication is similar for both prokaryotic and eukaryotic DNA. Eukaryotic DNA replication protein list List of major proteins involved in eukaryotic DNA replication: See also DNA replication Prokaryotic DNA replication Processivity References DNA replication
Eukaryotic DNA replication
[ "Biology" ]
11,973
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
9,902,787
https://en.wikipedia.org/wiki/Structure%20theorem%20for%20finitely%20generated%20modules%20over%20a%20principal%20ideal%20domain
In mathematics, in the field of abstract algebra, the structure theorem for finitely generated modules over a principal ideal domain is a generalization of the fundamental theorem of finitely generated abelian groups and roughly states that finitely generated modules over a principal ideal domain (PID) can be uniquely decomposed in much the same way that integers have a prime factorization. The result provides a simple framework to understand various canonical form results for square matrices over fields. Statement When a vector space over a field F has a finite generating set, then one may extract from it a basis consisting of a finite number n of vectors, and the space is therefore isomorphic to F^n. The corresponding statement with F generalized to a principal ideal domain R is no longer true, since a basis for a finitely generated module over R might not exist. However such a module is still isomorphic to a quotient of some module R^n with n finite (to see this it suffices to construct the morphism that sends the elements of the canonical basis of R^n to the generators of the module, and take the quotient by its kernel). By changing the choice of generating set, one can in fact describe the module as the quotient of some R^n by a particularly simple submodule, and this is the structure theorem. The structure theorem for finitely generated modules over a principal ideal domain usually appears in the following two forms. Invariant factor decomposition For every finitely generated module M over a principal ideal domain R, there is a unique decreasing sequence of proper ideals (d1) ⊇ (d2) ⊇ ... ⊇ (dn) such that M is isomorphic to the sum of cyclic modules: M ≅ R/(d1) ⊕ R/(d2) ⊕ ... ⊕ R/(dn). The generators di of the ideals are unique up to multiplication by a unit, and are called invariant factors of M. Since the ideals should be proper, these factors must not themselves be invertible (this avoids trivial factors in the sum), and the inclusion of the ideals means one has the divisibility relations d1 | d2 | ... | dn. The free part is visible in the part of the decomposition corresponding to factors di = 0. Such factors, if any, occur at the end of the sequence. While the direct sum is uniquely determined by M, the isomorphism giving the decomposition itself is not unique in general. For instance if R is actually a field, then all occurring ideals must be zero, and one obtains the decomposition of a finite dimensional vector space into a direct sum of one-dimensional subspaces; the number of such factors is fixed, namely the dimension of the space, but there is a lot of freedom for choosing the subspaces themselves (if the dimension is greater than 1). The nonzero elements di, together with the number of di which are zero, form a complete set of invariants for the module. Explicitly, this means that any two modules sharing the same set of invariants are necessarily isomorphic. Some prefer to write the free part of M separately: R^f ⊕ R/(d1) ⊕ R/(d2) ⊕ ... ⊕ R/(d(n−f)), where the visible di are nonzero, and f is the number of di's in the original sequence which are 0. Primary decomposition Every finitely generated module M over a principal ideal domain R is isomorphic to one of the form ⊕i R/(qi), where the (qi) are primary ideals. The qi are unique (up to multiplication by units). The elements qi are called the elementary divisors of M. In a PID, nonzero primary ideals are powers of primes, and so (qi) = (pi^ri). When qi = 0, the resulting indecomposable module is R itself, and this is inside the part of M that is a free module. The summands R/(qi) are indecomposable, so the primary decomposition is a decomposition into indecomposable modules, and thus every finitely generated module over a PID is a completely decomposable module. 
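As a concrete illustration of the two forms of the theorem (an added example, not part of the article's statement), the following LaTeX fragment works out both decompositions for a small abelian group, i.e. the case R = Z.

```latex
% Worked example over R = \mathbb{Z} (illustrative).
\[
  M \;=\; \mathbb{Z}/6 \,\oplus\, \mathbb{Z}/15 .
\]
% Primary decomposition: split each cyclic factor into prime-power parts,
% 6 = 2 \cdot 3 and 15 = 3 \cdot 5, giving elementary divisors 2, 3, 3, 5:
\[
  M \;\cong\; \mathbb{Z}/2 \,\oplus\, \mathbb{Z}/3 \,\oplus\, \mathbb{Z}/3 \,\oplus\, \mathbb{Z}/5 .
\]
% Invariant factor decomposition: regroup the prime powers so that each factor divides the next,
% d_1 = 3 and d_2 = 2 \cdot 3 \cdot 5 = 30, with d_1 \mid d_2:
\[
  M \;\cong\; \mathbb{Z}/3 \,\oplus\, \mathbb{Z}/30 .
\]
```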
Since PIDs are Noetherian rings, this can be seen as a manifestation of the Lasker-Noether theorem. As before, it is possible to write the free part (corresponding to the factors with qi = 0) separately and express M as R^f ⊕ (⊕i R/(qi)), where the visible qi are nonzero. Proofs One proof proceeds as follows: Every finitely generated module over a PID is also finitely presented because a PID is Noetherian, an even stronger condition than coherence. Take a presentation, which is a map R^r → R^g (relations to generators), and put it in Smith normal form. This yields the invariant factor decomposition, and the diagonal entries of Smith normal form are the invariant factors. Another outline of a proof: Denote by tM the torsion submodule of M. The torsion submodule can be embedded as a submodule of M, and this gives the short exact sequence 0 → tM → M → M/tM → 0, where the map M → M/tM is the projection. M/tM is a finitely generated torsion free module, and such a module over a commutative PID is a free module of finite rank, so it is isomorphic to R^n for some non-negative integer n. Since every free module is a projective module, there exists a right inverse of the projection map (it suffices to lift each of the generators of M/tM into M). By the splitting lemma (left split), M splits as M ≅ tM ⊕ R^n. For a prime element p in R we can then speak of Np = {m ∈ tM : p^k m = 0 for some k}, the p-primary component. This is a submodule of tM, and it turns out that each Np is a direct sum of cyclic modules, and that tM is a direct sum of Np for a finite number of distinct primes p. Putting the previous two steps together, M is decomposed into cyclic modules of the indicated types. Corollaries This includes the classification of finite-dimensional vector spaces as a special case, where R = K is a field. Since fields have no non-trivial ideals, every finitely generated vector space is free. Taking R = Z yields the fundamental theorem of finitely generated abelian groups. Let T be a linear operator on a finite-dimensional vector space V over K. Taking R = K[T], the algebra of polynomials with coefficients in K evaluated at T, yields structure information about T. V can be viewed as a finitely generated module over K[T]. The last invariant factor is the minimal polynomial, and the product of invariant factors is the characteristic polynomial. Combined with a standard matrix form for K[T]/(p(T)), this yields various canonical forms: invariant factors + companion matrix yields Frobenius normal form (aka, rational canonical form); primary decomposition + companion matrix yields primary rational canonical form; primary decomposition + Jordan blocks yields Jordan canonical form (this latter only holds over an algebraically closed field). Uniqueness While the invariants (rank, invariant factors, and elementary divisors) are unique, the isomorphism between M and its canonical form is not unique, and does not even preserve the direct sum decomposition. This follows because there are non-trivial automorphisms of these modules which do not preserve the summands. However, one has a canonical torsion submodule T, and similar canonical submodules corresponding to each (distinct) invariant factor, which yield a canonical sequence of submodules; compare composition series in the Jordan–Hölder theorem. For instance, if M = Z ⊕ Z/2, and (1, 0), (0, 1) is one basis, then (1, 1), (0, 1) is another basis, and the change of basis matrix does not preserve the summand Z ⊕ 0. However, it does preserve the Z/2 summand, as this is the torsion submodule (equivalently here, the 2-torsion elements). Generalizations Groups The Jordan–Hölder theorem is a more general result for finite groups (or modules over an arbitrary ring). In this generality, one obtains a composition series, rather than a direct sum. 
The Krull–Schmidt theorem and related results give conditions under which a module has something like a primary decomposition, a decomposition as a direct sum of indecomposable modules in which the summands are unique up to order. Primary decomposition The primary decomposition generalizes to finitely generated modules over commutative Noetherian rings, and this result is called the Lasker–Noether theorem. Indecomposable modules By contrast, unique decomposition into indecomposable submodules does not generalize as far, and the failure is measured by the ideal class group, which vanishes for PIDs. For rings that are not principal ideal domains, unique decomposition need not even hold for modules over a ring generated by two elements. For the ring R = Z[√−5], both the module R and its submodule M generated by 2 and 1 + √−5 are indecomposable. While R is not isomorphic to M, R ⊕ R is isomorphic to M ⊕ M; thus the images of the M summands give indecomposable submodules L1, L2 < R ⊕ R which give a different decomposition of R ⊕ R. The failure of uniquely factorizing R ⊕ R into a direct sum of indecomposable modules is directly related (via the ideal class group) to the failure of the unique factorization of elements of R into irreducible elements of R. However, over a Dedekind domain the ideal class group is the only obstruction, and the structure theorem generalizes to finitely generated modules over a Dedekind domain with minor modifications. There is still a unique torsion part, with a torsionfree complement (unique up to isomorphism), but a torsionfree module over a Dedekind domain is no longer necessarily free. Torsionfree modules over a Dedekind domain are determined (up to isomorphism) by rank and Steinitz class (which takes value in the ideal class group), and the decomposition into a direct sum of copies of R (rank one free modules) is replaced by a direct sum into rank one projective modules: the individual summands are not uniquely determined, but the Steinitz class (of the sum) is. Non-finitely generated modules Similarly for modules that are not finitely generated, one cannot expect such a nice decomposition: even the number of factors may vary. There are Z-submodules of Q4 which are simultaneously direct sums of two indecomposable modules and direct sums of three indecomposable modules, showing the analogue of the primary decomposition cannot hold for infinitely generated modules, even over the integers, Z. Another issue that arises with non-finitely generated modules is that there are torsion-free modules which are not free. For instance, consider the ring Z of integers. Then Q is a torsion-free Z-module which is not free. Another classical example of such a module is the Baer–Specker group, the group of all sequences of integers under termwise addition. In general, the question of which infinitely generated torsion-free abelian groups are free depends on which large cardinals exist. A consequence is that any structure theorem for infinitely generated modules depends on a choice of set theory axioms and may be invalid under a different choice. References Theorems in abstract algebra Module theory de:Hauptidealring#Moduln über Hauptidealringen
Structure theorem for finitely generated modules over a principal ideal domain
[ "Mathematics" ]
2,210
[ "Theorems in algebra", "Fields of abstract algebra", "Module theory", "Theorems in abstract algebra" ]
9,903,342
https://en.wikipedia.org/wiki/Norepinephrine
Norepinephrine (NE), also called noradrenaline (NA) or noradrenalin, is an organic chemical in the catecholamine family that functions in the brain and body as a hormone, neurotransmitter and neuromodulator. The name "noradrenaline" (from Latin ad, "near", and ren, "kidney") is more commonly used in the United Kingdom and the rest of the world, whereas "norepinephrine" (from Ancient Greek ἐπῐ́ (epí), "upon", and νεφρός (nephrós), "kidney") is usually preferred in the United States. "Norepinephrine" is also the international nonproprietary name given to the drug. Regardless of which name is used for the substance itself, parts of the body that produce or are affected by it are referred to as noradrenergic. The general function of norepinephrine is to mobilize the brain and body for action. Norepinephrine release is lowest during sleep, rises during wakefulness, and reaches much higher levels during situations of stress or danger, in the so-called fight-or-flight response. In the brain, norepinephrine increases arousal and alertness, promotes vigilance, enhances formation and retrieval of memory, and focuses attention; it also increases restlessness and anxiety. In the rest of the body, norepinephrine increases heart rate and blood pressure, triggers the release of glucose from energy stores, increases blood flow to skeletal muscle, reduces blood flow to the gastrointestinal system, and inhibits voiding of the bladder and gastrointestinal motility. In the brain, noradrenaline is produced in nuclei that are small yet exert powerful effects on other brain areas. The most important of these nuclei is the locus coeruleus, located in the pons. Outside the brain, norepinephrine is used as a neurotransmitter by sympathetic ganglia located near the spinal cord or in the abdomen, as well as Merkel cells located in the skin. It is also released directly into the bloodstream by the adrenal glands. Regardless of how and where it is released, norepinephrine acts on target cells by binding to and activating adrenergic receptors located on the cell surface. A variety of medically important drugs work by altering the actions of noradrenaline systems. Noradrenaline itself is widely used as an injectable drug for the treatment of critically low blood pressure. Stimulants often increase, enhance, or otherwise act as agonists of norepinephrine. Drugs such as cocaine and methylphenidate act as reuptake inhibitors of norepinephrine, as do some antidepressants, such as those in the SNRI class. One of the more notable drugs in the stimulant class is amphetamine, which acts as a dopamine and norepinephrine analog, reuptake inhibitor, as well as an agent that increases the amount of global catecholamine signaling throughout the nervous system by reversing transporters in the synapses. Beta blockers, which counter some of the effects of noradrenaline by blocking beta-adrenergic receptors, are sometimes used to treat glaucoma, migraines and a range of cardiovascular diseases. β1Rs preferentially bind epinephrine, along with norepinephrine to a lesser extent and mediates some of their cellular effects in cardiac myocytes such as increased positive inotropy and lusitropy. 
β-blockers exert their cardioprotective effects through decreasing oxygen demand in cardiac myocytes; this is accomplished via decreasing the force of contraction during systole (negative inotropy) and decreasing the rate of relaxation during diastole (negative lusitropy), thus reducing myocardial energy demand which is useful in treating cardiovascular disorders accompanied by inadequate myocardial oxygen supply. Alpha blockers, which counter the effects of noradrenaline on alpha-adrenergic receptors, are occasionally used to treat hypertension and psychiatric conditions. Alpha-2 agonists often have a sedating and antihypertensive effect and are commonly used as anesthesia enhancers in surgery, as well as in treatment of drug or alcohol dependence. For reasons that are still unclear, some Alpha-2 agonists, such as guanfacine, have also been shown to be effective in the treatment of anxiety disorders and ADHD. Many important psychiatric drugs exert strong effects on noradrenaline systems in the brain, resulting in effects that may be helpful or harmful. Structure Norepinephrine is a catecholamine and a phenethylamine. Its structure differs from that of epinephrine only in that epinephrine has a methyl group attached to its nitrogen, whereas the methyl group is replaced by a hydrogen atom in norepinephrine. The prefix nor- is derived as an abbreviation of the word "normal", used to indicate a demethylated compound. Norepinephrine consists of a catechol moiety (a benzene ring with two adjoining hydroxyl groups in the meta-para position), and an ethylamine side chain consisting of a hydroxyl group bonded in the benzylic position. Biochemical mechanisms Biosynthesis Norepinephrine is synthesized from the amino acid tyrosine by a series of enzymatic steps in the adrenal medulla and postganglionic neurons of the sympathetic nervous system, while the norepinephrine that functions as a neurotransmitter in the brain is produced in the locus coeruleus, located in the pons of the brainstem. While the conversion of tyrosine to dopamine occurs predominantly in the cytoplasm, the conversion of dopamine to norepinephrine by dopamine β-monooxygenase occurs predominantly inside neurotransmitter vesicles. The metabolic pathway is: Phenylalanine → Tyrosine → L-DOPA → Dopamine → Norepinephrine Thus the direct precursor of norepinephrine is dopamine, which is synthesized indirectly from the essential amino acid phenylalanine or the non-essential amino acid tyrosine. These amino acids are found in nearly every protein and, as such, are provided by ingestion of protein-containing food, with tyrosine being the most common. Phenylalanine is converted into tyrosine by the enzyme phenylalanine hydroxylase, with molecular oxygen (O2) and tetrahydrobiopterin as cofactors. Tyrosine is converted into L-DOPA by the enzyme tyrosine hydroxylase, with tetrahydrobiopterin, O2, and probably ferrous iron (Fe2+) as cofactors. Conversion of tyrosine to L-DOPA is inhibited by Metyrosine, a tyrosine analog. L-DOPA is converted into dopamine by the enzyme aromatic L-amino acid decarboxylase (also known as DOPA decarboxylase), with pyridoxal phosphate as a cofactor. Dopamine is then converted into norepinephrine by the enzyme dopamine β-monooxygenase (formerly known as dopamine β-hydroxylase), with O2 and ascorbic acid as cofactors. Norepinephrine itself can further be converted into epinephrine by the enzyme phenylethanolamine N-methyltransferase with S-adenosyl-L-methionine as cofactor. 
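The linear biosynthesis pathway just described lends itself to a simple tabular representation. The sketch below encodes the steps, enzymes, and cofactors named above as plain data; the PATHWAY table layout and the trace() helper are illustrative conveniences invented here, not an established library or API.

```python
# Illustrative sketch of the norepinephrine biosynthesis pathway described above,
# encoded as a list of enzymatic steps (names as given in the text).

PATHWAY = [
    # (substrate, product, enzyme, cofactors)
    ("phenylalanine", "tyrosine", "phenylalanine hydroxylase",
     ["O2", "tetrahydrobiopterin"]),
    ("tyrosine", "L-DOPA", "tyrosine hydroxylase",
     ["O2", "tetrahydrobiopterin", "Fe2+"]),
    ("L-DOPA", "dopamine", "aromatic L-amino acid decarboxylase",
     ["pyridoxal phosphate"]),
    ("dopamine", "norepinephrine", "dopamine beta-monooxygenase",
     ["O2", "ascorbic acid"]),
    ("norepinephrine", "epinephrine", "phenylethanolamine N-methyltransferase",
     ["S-adenosyl-L-methionine"]),
]

def trace(start: str, end: str) -> list[str]:
    """Return the enzymes used, in order, to get from `start` to `end`."""
    enzymes, current = [], start
    for substrate, product, enzyme, _ in PATHWAY:
        if substrate == current:
            enzymes.append(enzyme)
            current = product
        if current == end:
            break
    if current != end:
        raise ValueError(f"No route from {start} to {end} in this table")
    return enzymes

if __name__ == "__main__":
    print(trace("tyrosine", "norepinephrine"))
    # ['tyrosine hydroxylase', 'aromatic L-amino acid decarboxylase',
    #  'dopamine beta-monooxygenase']
```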
Degradation In mammals, norepinephrine is rapidly degraded to various metabolites. The initial step in the breakdown can be catalyzed by either of the enzymes monoamine oxidase (mainly monoamine oxidase A) or COMT. From there, the breakdown can proceed by a variety of pathways. The principal end products are either Vanillylmandelic acid or a conjugated form of MHPG, both of which are thought to be biologically inactive and are excreted in the urine. Functions Cellular effects Like many other biologically active substances, norepinephrine exerts its effects by binding to and activating receptors located on the surface of cells. Two broad families of norepinephrine receptors have been identified, known as alpha and beta-adrenergic receptors. Alpha receptors are divided into subtypes α1 and α2; beta receptors into subtypes β1, β2, and β3. All of these function as G protein-coupled receptors, meaning that they exert their effects via a complex second messenger system. Alpha-2 receptors usually have inhibitory effects, but many are located pre-synaptically (i.e., on the surface of the cells that release norepinephrine), so the net effect of alpha-2 activation is often a decrease in the amount of norepinephrine released. Alpha-1 receptors and all three types of beta receptors usually have excitatory effects. Storage, release, and reuptake Inside the brain norepinephrine functions as a neurotransmitter and neuromodulator, and is controlled by a set of mechanisms common to all monoamine neurotransmitters. After synthesis, norepinephrine is transported from the cytosol into synaptic vesicles by the vesicular monoamine transporter (VMAT). VMAT can be inhibited by Reserpine causing a decrease in neurotransmitter stores. Norepinephrine is stored in these vesicles until it is ejected into the synaptic cleft, typically after an action potential causes the vesicles to release their contents directly into the synaptic cleft through a process called exocytosis. Once in the synapse, norepinephrine binds to and activates receptors. After an action potential, the norepinephrine molecules quickly become unbound from their receptors. They are then absorbed back into the presynaptic cell, via reuptake mediated primarily by the norepinephrine transporter (NET). Once back in the cytosol, norepinephrine can either be broken down by monoamine oxidase or repackaged into vesicles by VMAT, making it available for future release. Sympathetic nervous system Norepinephrine is the main neurotransmitter used by the sympathetic nervous system, which consists of about two dozen sympathetic chain ganglia located next to the spinal cord, plus a set of prevertebral ganglia located in the chest and abdomen. These sympathetic ganglia are connected to numerous organs, including the eyes, salivary glands, heart, lungs, liver, gallbladder, stomach, intestines, kidneys, urinary bladder, reproductive organs, muscles, skin, and adrenal glands. Sympathetic activation of the adrenal glands causes the part called the adrenal medulla to release norepinephrine (as well as epinephrine) into the bloodstream, from which, functioning as a hormone, it gains further access to a wide variety of tissues. Broadly speaking, the effect of norepinephrine on each target organ is to modify its state in a way that makes it more conducive to active body movement, often at a cost of increased energy use and increased wear and tear. 
This can be contrasted with the acetylcholine-mediated effects of the parasympathetic nervous system, which modifies most of the same organs into a state more conducive to rest, recovery, and digestion of food, and usually less costly in terms of energy expenditure. The sympathetic effects of norepinephrine include: In the eyes, an increase in the production of tears, making the eyes more moist, and pupil dilation through contraction of the iris dilator. In the heart, an increase in the amount of blood pumped. In brown adipose tissue, an increase in calories burned to generate body heat (thermogenesis). Multiple effects on the immune system. The sympathetic nervous system is the primary path of interaction between the immune system and the brain, and several components receive sympathetic inputs, including the thymus, spleen, and lymph nodes. However, the effects are complex, with some immune processes activated while others are inhibited. In the arteries, constriction of blood vessels causes an increase in blood pressure. In the kidneys, release of renin and retention of sodium in the bloodstream. In the liver, an increase in production of glucose, either by glycogenolysis after a meal or by gluconeogenesis when food has not recently been consumed. Glucose is the body's main energy source in most conditions. In the pancreas, increased release of glucagon, a hormone whose main effect is to increase the production of glucose by the liver. In skeletal muscles, an increase in glucose uptake. In adipose tissue (i.e., fat cells), an increase in lipolysis, that is, conversion of fat to substances that can be used directly as energy sources by muscles and other tissues. In the stomach and intestines, a reduction in digestive activity. This results from a generally inhibitory effect of norepinephrine on the enteric nervous system, causing decreases in gastrointestinal mobility, blood flow, and secretion of digestive substances. Noradrenaline and ATP are sympathetic co-transmitters. It is found that the endocannabinoid anandamide and the cannabinoid WIN 55,212-2 can modify the overall response to sympathetic nerve stimulation, which indicates that prejunctional CB1 receptors mediate the sympatho-inhibitory action. Thus cannabinoids can inhibit both the noradrenergic and purinergic components of sympathetic neurotransmission. Central nervous system The noradrenergic neurons in the brain form a neurotransmitter system, that, when activated, exerts effects on large areas of the brain. The effects are manifested in alertness, arousal, and readiness for action. Noradrenergic neurons (i.e., neurons whose primary neurotransmitter is norepinephrine) are comparatively few in number, and their cell bodies are confined to a few relatively small brain areas, but they send projections to many other brain areas and exert powerful effects on their targets. These noradrenergic cell groups were first mapped in 1964 by Annica Dahlström and Kjell Fuxe, who assigned them labels starting with the letter "A" (for "aminergic"). In their scheme, areas A1 through A7 contain the neurotransmitter norepinephrine (A8 through A14 contain dopamine). Noradrenergic cell group A1 is located in the caudal ventrolateral part of the medulla, and plays a role in the control of body fluid metabolism. Noradrenergic cell group A2 is located in a brainstem area called the solitary nucleus; these cells have been implicated in a variety of responses, including control of food intake and responses to stress. 
Cell groups A5 and A7 project mainly to the spinal cord. The most important source of norepinephrine in the brain is the locus coeruleus, which contains noradrenergic cell group A6 and adjoins cell group A4. The locus coeruleus is quite small in absolute terms—in primates, it is estimated to contain around 15,000 neurons, less than one-millionth of the neurons in the brain—but it sends projections to every major part of the brain and also to the spinal cord. The level of activity in the locus coeruleus correlates broadly with vigilance and speed of reaction. LC activity is low during sleep and drops to virtually nothing during the REM (dreaming) state. It runs at a baseline level during wakefulness, but increases temporarily when a person is presented with any sort of stimulus that draws attention. Unpleasant stimuli such as pain, difficulty breathing, bladder distension, heat or cold generate larger increases. Extremely unpleasant states such as intense fear or intense pain are associated with very high levels of LC activity. Norepinephrine released by the locus coeruleus affects brain function in several ways. It enhances processing of sensory inputs, enhances attention, enhances formation and retrieval of both long-term and working memory, and enhances the ability of the brain to respond to inputs by changing the activity pattern in the prefrontal cortex and other areas. The control of arousal level is strong enough that drug-induced suppression of the LC has a powerful sedating effect. There is a great similarity between situations that activate the locus coeruleus in the brain and situations that activate the sympathetic nervous system in the periphery: the LC essentially mobilizes the brain for action while the sympathetic system mobilizes the body. It has been argued that this similarity arises because both are to a large degree controlled by the same brain structures, particularly a part of the brainstem called the nucleus gigantocellularis. Skin Norepinephrine is also produced by Merkel cells which are part of the somatosensory system. It activates the afferent sensory neuron. Pharmacology A large number of important drugs exert their effects by interacting with norepinephrine systems in the brain or body. Their uses include treatment of cardiovascular problems, shock, and a variety of psychiatric conditions. These drugs are divided into sympathomimetic drugs, which mimic or enhance at least some of the effects of norepinephrine released by the sympathetic nervous system, and sympatholytic drugs, which block at least some of those effects. Both of these are large groups with diverse uses, depending on exactly which effects are enhanced or blocked. Norepinephrine itself is classified as a sympathomimetic drug: when given by intravenous injection, its effects of increasing heart rate and force of contraction and constricting blood vessels make it very useful for treating medical emergencies that involve critically low blood pressure. The Surviving Sepsis Campaign recommended norepinephrine as the first-line agent for treating septic shock that is unresponsive to fluid resuscitation, supplemented by vasopressin and epinephrine; dopamine use is restricted to highly selected patients. Antagonists Beta blockers These are sympatholytic drugs that block beta-adrenergic receptors while having little or no effect on alpha receptors. 
They are sometimes used to treat high blood pressure, atrial fibrillation, and congestive heart failure, but recent reviews have concluded that other types of drugs are usually superior for those purposes. Beta blockers may be a viable choice for other cardiovascular conditions, though, including angina and Marfan syndrome. They are also widely used to treat glaucoma, most commonly in the form of eyedrops. Because of their effects in reducing anxiety symptoms and tremor, they have sometimes been used by entertainers, public speakers, and athletes to reduce performance anxiety, although they are not medically approved for that purpose and are banned by the International Olympic Committee. However, the usefulness of beta blockers is limited by a range of serious side effects, including slowing of heart rate, a drop in blood pressure, asthma, and reactive hypoglycemia. The negative effects can be particularly severe in people with diabetes. Alpha blockers These are sympatholytic drugs that block the effects of adrenergic alpha receptors while having little or no effect on beta receptors. Drugs belonging to this group can have very different effects, however, depending on whether they primarily block alpha-1 receptors, alpha-2 receptors, or both. Alpha-2 receptors, as described elsewhere in this article, are frequently located on norepinephrine-releasing neurons themselves and have inhibitory effects on them; consequently, blockage of alpha-2 receptors usually results in an increase in norepinephrine release. Alpha-1 receptors are usually located on target cells and have excitatory effects on them; consequently, blockage of alpha-1 receptors usually results in blocking some of the effects of norepinephrine. Drugs such as phentolamine that act on both types of receptors can produce a complex combination of both effects. In most cases when the term "alpha blocker" is used without qualification, it refers to a selective alpha-1 antagonist. Selective alpha-1 blockers have a variety of uses. Since one of their effects is to inhibit the contraction of the smooth muscle in the prostate, they are often used to treat symptoms of benign prostatic hyperplasia. Alpha-blockers also likely help people pass their kidney stones. Their effects on the central nervous system make them useful for treating generalized anxiety disorder, panic disorder, and posttraumatic stress disorder. They may, however, have significant side effects, including a drop in blood pressure. Some antidepressants function partly as selective alpha-2 blockers, but the best-known drug in that class is yohimbine, which is extracted from the bark of the African yohimbe tree. Yohimbine acts as a male potency enhancer, but its usefulness for that purpose is limited by serious side-effects including anxiety and insomnia. Overdoses can cause a dangerous increase in blood pressure. Yohimbine is banned in many countries, but in the United States, because it is extracted from a plant rather than chemically synthesized, it is sold over the counter as a nutritional supplement. Alpha-2 agonists These are sympathomimetic drugs that activate alpha-2 receptors or enhance their effects. Because alpha-2 receptors are inhibitory and many are located presynaptically on norepinephrine-releasing cells, the net effect of these drugs is usually to reduce the amount of norepinephrine released. Drugs in this group that are capable of entering the brain often have strong sedating effects, due to their inhibitory effects on the locus coeruleus. 
Clonidine and guanfacine, for example, are used for the treatment of anxiety disorders and insomnia, and also as a sedative premedication for patients about to undergo surgery. Xylazine, another drug in this group, is also a powerful sedative and is often used in combination with ketamine as a general anaesthetic for veterinary surgery—in the United States it has not been approved for use in humans. Stimulants and antidepressants These are drugs whose primary effects are thought to be mediated by different neurotransmitter systems (dopamine for stimulants, serotonin for antidepressants), but many also increase levels of norepinephrine in the brain. Amphetamine, for example, is a stimulant that increases release of norepinephrine as well as dopamine. Monoamine oxidase A inhibitors (MAO-A) are antidepressants that inhibit the metabolic degradation of norepinephrine as well as serotonin and dopamine. In some cases it is difficult to distinguish the norepinephrine-mediated effects from the effects related to other neurotransmitters. Diseases and disorders A number of important medical problems involve dysfunction of the norepinephrine system in the brain or body. Sympathetic hyperactivation Hyperactivation of the sympathetic nervous system is not a recognized condition in itself, but it is a component of a number of conditions, as well as a possible consequence of taking sympathomimetic drugs. It causes a distinctive set of symptoms including aches and pains, rapid heartbeat, elevated blood pressure, sweating, palpitations, anxiety, headache, paleness, and a drop in blood glucose. If sympathetic activity is elevated for an extended time, it can cause weight loss and other stress-related body changes. The list of conditions that can cause sympathetic hyperactivation includes severe brain injury, spinal cord damage, heart failure, high blood pressure, kidney disease, and various types of stress. Pheochromocytoma A pheochromocytoma is a rarely occurring tumor of the adrenal medulla, caused either by genetic factors or certain types of cancer. The consequence is a massive increase in the amount of norepinephrine and epinephrine released into the bloodstream. The most obvious symptoms are those of sympathetic hyperactivation, including particularly a rise in blood pressure that can reach fatal levels. The most effective treatment is surgical removal of the tumor. Stress Stress, to a physiologist, means any situation that threatens the continued stability of the body and its functions. Stress affects a wide variety of body systems: the two most consistently activated are the hypothalamic-pituitary-adrenal axis and the norepinephrine system, including both the sympathetic nervous system and the locus coeruleus-centered system in the brain. Stressors of many types evoke increases in noradrenergic activity, which mobilizes the brain and body to meet the threat. Chronic stress, if continued for a long time, can damage many parts of the body. A significant part of the damage is due to the effects of sustained norepinephrine release, because of norepinephrine's general function of directing resources away from maintenance, regeneration, and reproduction, and toward systems that are required for active movement. The consequences can include slowing of growth (in children), sleeplessness, loss of libido, gastrointestinal problems, impaired disease resistance, slower rates of injury healing, depression, and increased vulnerability to addiction. 
ADHD Attention deficit hyperactivity disorder is a neurodevelopmental condition involving problems with attention, hyperactivity, and impulsiveness. It is most commonly treated using stimulant drugs such as methylphenidate (Ritalin), whose primary effect is to increase dopamine levels in the brain, but drugs in this group also generally increase brain levels of norepinephrine, and it has been difficult to determine whether these actions are involved in their clinical value. There is also substantial evidence that many people with ADHD show biomarkers involving altered norepinephrine processing. Several drugs whose primary effects are on norepinephrine, including guanfacine, clonidine, and atomoxetine, have been tried as treatments for ADHD, and found to have effects comparable to those of stimulants. Autonomic failure Several conditions, including Parkinson's disease, diabetes, and so-called pure autonomic failure, can cause a loss of norepinephrine-secreting neurons in the sympathetic nervous system. The symptoms are widespread, the most serious being a reduction in heart rate and an extreme drop in resting blood pressure, making it impossible for severely affected people to stand for more than a few seconds without fainting. Treatment can involve dietary changes or drugs. REM sleep deprivation Norepinephrine prevents REM sleep, and lack of REM sleep increases noradrenaline secretion as a result of the locus coeruleus not ceasing producing it. It causes neurodegeneration if its loss is sustained for several days. Comparative biology and evolution Norepinephrine has been reported to exist in a wide variety of animal species, including protozoa, placozoa and cnidaria (jellyfish and related species), but not in ctenophores (comb jellies), whose nervous systems differ greatly from those of other animals. It is generally present in deuterostomes (vertebrates, etc.), but in protostomes (arthropods, molluscs, flatworms, nematodes, annelids, etc.) it is replaced by octopamine, a closely related chemical with a closely related synthesis pathway. In insects, octopamine has alerting and activating functions that correspond (at least roughly) with the functions of norepinephrine in vertebrates. It has been argued that octopamine evolved to replace norepinephrine rather than vice versa; however, the nervous system of amphioxus (a primitive chordate) has been reported to contain octopamine but not norepinephrine, which presents difficulties for that hypothesis. History Early in the twentieth century Walter Cannon, who had popularized the idea of a sympathoadrenal system preparing the body for fight and flight, and his colleague Arturo Rosenblueth developed a theory of two sympathins, sympathin E (excitatory) and sympathin I (inhibitory), responsible for these actions. The Belgian pharmacologist Zénon Bacq as well as Canadian and U.S. pharmacologists between 1934 and 1938 suggested that noradrenaline might be a sympathetic transmitter. In 1939, Hermann Blaschko and Peter Holtz independently identified the biosynthetic mechanism for norepinephrine in the vertebrate body. In 1945 Ulf von Euler published the first of a series of papers that established the role of norepinephrine as a neurotransmitter. He demonstrated the presence of norepinephrine in sympathetically innervated tissues and brain, and adduced evidence that it is the sympathin of Cannon and Rosenblueth. Stanley Peart was the first to demonstrate the release of noradrenaline after the stimulation of sympathetic nerves. 
References External links TAAR1 agonists Amphetamine Alpha-adrenergic agonists Beta-adrenergic agonists Neurotransmitters Hormones Biology of attention deficit hyperactivity disorder Catecholamines Peripherally selective drugs Phenylethanolamines Stress hormones
Norepinephrine
[ "Chemistry" ]
6,321
[ "Neurochemistry", "Neurotransmitters" ]
9,903,607
https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%202%20receptor
Insulin-like growth factor 2 receptor (IGF2R), also called the cation-independent mannose-6-phosphate receptor (CI-MPR) is a protein that in humans is encoded by the IGF2R gene. IGF2R is a multifunctional protein receptor that binds insulin-like growth factor 2 (IGF2) at the cell surface and mannose-6-phosphate (M6P)-tagged proteins in the trans-Golgi network. Structure The structure of the IGF2R is a type I transmembrane protein (that is, it has a single transmembrane domain with its C-terminus on the cytoplasmic side of lipid membranes) with a large extracellular/lumenal domain and a relatively short cytoplasmic tail. The extracellular domain consists of a small region homologous to the collagen-binding domain of fibronectin and of fifteen repeats of approximately 147 amino acid residues. Each of these repeats is homologous to the 157-residue extracytoplasmic domain of the mannose 6-phosphate receptor. Binding to IGF2 is mediated through one of the repeats, while two different repeats are responsible for binding to mannose-6-phosphate. The IGF2R is approximately 300 kDa in size; it appears to exist and function as a dimer. Function IGF2R functions to clear IGF2 from the cell surface to attenuate signalling, and to transport lysosomal acid hydrolase precursors from the Golgi apparatus to the lysosome. After binding IGF2 at the cell surface, IGF2Rs accumulate in forming clathrin-coated vesicles and are internalized. In the lumen of the trans-Golgi network, the IGF2R binds M6P-tagged cargo. The IGF2Rs (bound to their cargo) are recognized by the GGA family of clathrin adaptor proteins and accumulate in forming clathrin-coated vesicles. IGF2Rs from both the cell surface and the Golgi are trafficked to the early endosome where, in the relatively low pH environment of the endosome, the IGF2Rs release their cargo. The IGF2Rs are recycled back to the Golgi by the retromer complex, again by way of interaction with GGAs and vesicles. The cargo proteins are then trafficked to the lysosome via the late endosome independently of the IGF2Rs. Interactions Insulin-like growth factor 2 receptor has been shown to interact with M6PRBP1. Evolution The insulin-like growth factor 2 receptor function evolved from the cation-independent mannose 6-phosphate receptor and is first seen in Monotremes. The IGF-2 binding site was likely acquired fortuitously with the generation of an exonic splice site enhancer cluster in exon 34, presumably necessitated by several kilobases of repeat element insertions in the preceding intron. A six-fold affinity maturation then followed during therian evolution, coincident with the onset of imprinting and consistent with the theory of parental conflict. See also Cluster of differentiation IGF-1 Receptor Mannose 6-phosphate receptor References Further reading External links Receptors Clusters of differentiation
Insulin-like growth factor 2 receptor
[ "Chemistry" ]
693
[ "Receptors", "Signal transduction" ]
9,904,516
https://en.wikipedia.org/wiki/Tin%20can%20wall
A tin can wall is a wall constructed from tin cans, which are not a common building source. The cans can be laid in concrete, stacked vertically on top of each other, and crushed or cut and flattened to be used as shingles. They can also be used for furniture. Tin cans can form the actual fill-in structure (or walls) of a building, as is done with earthships. Tin cans have not been around for a long time, and neither have their building methods. The two main structural methods for building with tin cans are by laying them horizontally in a concrete matrix and by stacking them vertically. History Tin can building in New Mexico originated in the early 1980s as a response to the massive amounts of trash being discarded and the wasteful nature of common building practices. Tin can construction was an attempt to utilize a readily available resource that was normally sent to landfills or recycling centers. This led to various experiments in tin can building, including space-filler between wooden frames in traditional house styles and creating domes and archways using cans and cement. Within time, more simplified and practical methods were developed, such as the earthship tin can wall. The main person behind these efforts was Mike Reynolds, also creator of the earthship building method. Construction A “traditional” earthship tin can wall is made by horizontally stacking tin cans in a concrete matrix. The cans are laid side by side and in alternating rows, similar to bricks. This is done simply and efficiently, using batches of concrete between the cans. The consistency of the concrete must be relatively thick, so as to hold its form and the tin cans in place. A surprisingly large number of cans are required. The method for stacking the cans involves creating a row of cans separated by hand-formed “lumps” of concrete. The layout of a row is can, concrete, can, etc. This is then repeated, except that the alternating pattern is reversed, so that every can is laid on top of a concrete “lump” below it. This continues until completed, or the weight of the wall and the hardness of the cement seem questionable in terms of solidity. At that point it would be wise to wait for the wall to harden, but the laying time for cans and concrete is such that by the time a builder makes it back to an area that was recently laid it has had time to set. It is a judgment call as to whether or not the builder should continue, but by the next day or even later in the same day building can resume. Materials The materials that go into a tin can wall are simple: mainly tin cans and concrete. Tin cans (now aluminum cans as real tin cans are not as readily available) can be acquired from any recycling center or a local bar. Brick mortar may also be used instead of cement. Coating Once the wall is completed, the cans and the concrete are covered with a layer of cement or adobe mud mixture. What is applied depends on the location of the wall; if it is located in an area where it will be exposed to water (such as in a bathroom or utility room) it will need to be coated with a concrete layer. If it is located in a living room or bedroom, it can be covered with adobe plaster. A tin can wall that has half of its structure outside (such as the wall of an entrance to a building) will be coated with cement on the outside and adobe on the inside. The shape of the cans (their pull-tabs, etc.) and the roughness of the cement will provide a lath-like surface for the cement or adobe to stick to. 
This initial layer is “screeded” (scratched with a tool that creates a ridge-like pattern thereby making it easier to apply another coat) and a second layer is added. More layers may be added, but it is up to the builder’s judgment and dependent upon the material being applied. In the case of adobe mud, once the initial layer is applied and allowed to harden it will crack and will need additional coats. With cement fewer layers are needed. The basic rule is: the more coats/layers, the stronger and better-looking the wall will be. This can be overdone however. When a tin can wall has been sufficiently coated it will then be “finished” with a fine plaster (lime-based or other), stucco (if the wall is outside), or linseed oil (in the case of adobe). It can also be finished with a clay “slip”, or aliz, which is an earth-based coat that can have natural pigment and fine grains of mica mixed in to produce a beautiful shimmering and organic-looking surface. Examples An outside tin can insulating wall is a simple design. It is made out of two tin can walls with a layer of solid insulation in the middle. The insulation can vary in thickness, depending on climate and budget. It can be made out of various “green” or sustainable materials or average run-of-the-mill solid insulation. The exposed sides of the tin can walls (those not facing the insulation) are finished using methods aforementioned. The inside part of the wall can be coated with adobe while the outside is finished with concrete and stucco. A door frame can be built into the can wall, or rather the can wall is built around the frame. The process involves initially having a door frame set in place (on the foundation) and stacking cans to either side of the frame until they reach the other walls of the building and the ceiling. The door frame is fastened to the tin can wall by hammering nails partially into the side of the frame that will touch the tin can wall and allowing the concrete to harden around the nails. Short strips of metal lath are also attached to the frame and folded out (perpendicular to the frame) and allowed to set in the can/concrete matrix. The same method is applied to windows. The only difference is all sides of window are fastened to the tin can wall, while the door frame is fastened to the foundation on one side (bottom) and the can wall on three sides. Metal lath and nails are all that is needed, along with a bubble level or similar device. Once the desired height is reached to install a window frame, the wall is leveled. If any cans stick above the level plane they can be flattened to the desired height. Nails and lath sticking out from under the window frame holds the bottom of it in place, and the sides and top of the frame are fastened in the same fashion as a door frame. To make a smooth transition from door (or window frame) to tin can wall with plaster, sheets of metal lath are attached to the rim of the frame and folded over the gap between the frame and the can wall. A double-layered wooden frame is therefore required, to give a surface for the metal lath to be nailed to while leaving the inside frame untouched. However, this is not necessarily a necessity. Electrical wiring is simple, with the wires attached to the cans or fastened to the concrete before the initial coat. If a wire needs to go to the other side of a wall it can be punched directly through a can. Plumbing and pipework can use similar methods. 
The can wall can always be built around a pipe, or there can be a wooden frame made similar to a window or door to house the pipe. Strength and use Tin can walls are not considered load-bearing using this building method, although two-story circular dome structures have been built. The basic rule is that it can support considerable weight but should not be used to hold up much more than its own form and shape. It would not be wise to attach a heavy timber roof to a tin can wall without support beams or frames. The basic function for can walls is in-fill (filling in the space between support beams or the main structure) and the division of space. They work well to separate a living room from a bedroom, and are also used as insulating walls from the outside. An earthship tin can wall is both an efficient and economical building method. They are mainly composed of aluminum and cement, and can withstand the test of time. They are made from few materials (the coating method can be more complex than building the wall itself). They use recycled materials and require little or no skill to build. Alternative methods The other tin can wall method that will be briefly described is a system developed by a German artist named Michael Hönes. He has led community rebuilding efforts in Lesotho, Africa using tin cans to create housing and opportunities for Aids orphans and foster mothers. Known as the TCV (a.k.a. Tin-Can Villages) project, Hönes has created buildings using tin cans, masonite, paint, and wire. The roof is made out of corrugated metal shingles. In this method the cans are stacked vertically, one on top of the other in rows that are placed side by side and secured with wire. They are left exposed and are arranged in a decorative manner. The structures require no foundation, and are said to be able to withstand the Lesotho storms. A site for the first village in Maseru has been secured and the funding has been sourced. What is lacking is building permits (as of July 2004). The TCV organization, headed by Hönes, has been prefabricating tin can walls so that when the permits pass about one building a week can be constructed. So far the TCV organization’s efforts have been concentrated on storehouses, offices, a large weaving workshop for the women of the Elelloang Basali Weavers group in Teyateyaneng, and a solar-powered restaurant that cooks with solar ovens. Michael Hönes also focuses on tin can furniture and has created a stove out of tin cans that uses one-third less wood than what the poor people of the area commonly use, thereby diminishing the firewood crisis in Lesotho. See also Bottle wall Earthship References External links Michael Hönes' Alternative Methods Ideas For Building/Decorating With Cans Building engineering
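The Construction section above notes that "a surprisingly large number of cans are required" for the horizontal, brick-like layout. The sketch below gives a rough feel for that number; the can dimensions (a common 355 ml beverage can laid on its side) and the concrete joint thickness are assumptions made here for illustration, since the article does not specify either.

```python
# Rough estimate of how many cans the horizontal, brick-like wall layout
# described above would need. Can size and joint thickness are assumed values
# for illustration only; real counts depend on the cans and concrete used.

def cans_needed(wall_length_m: float, wall_height_m: float,
                can_length_m: float = 0.122,    # a 355 ml can laid on its side
                can_diameter_m: float = 0.066,
                joint_m: float = 0.02) -> int:  # assumed concrete "lump" width
    cans_per_row = int(wall_length_m // (can_length_m + joint_m))
    rows = int(wall_height_m // (can_diameter_m + joint_m))
    return cans_per_row * rows

if __name__ == "__main__":
    # A 3 m long, 2.4 m high interior partition wall:
    print(cans_needed(3.0, 2.4))   # about 570 cans on these assumptions
```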
Tin can wall
[ "Engineering" ]
2,057
[ "Building engineering", "Civil engineering", "Architecture" ]
9,906,981
https://en.wikipedia.org/wiki/OCRL
Inositol polyphosphate 5-phosphatase OCRL-1, also known as Lowe oculocerebrorenal syndrome protein, is an enzyme that in humans is encoded by the OCRL gene, located on the X chromosome at locus Xq26.1. The enzyme is in part responsible for regulating membrane trafficking and actin polymerization, and is found at several subcellular locations, including the trans-Golgi network. Deficiencies in OCRL-1 are associated with oculocerebrorenal syndrome and have also been linked to Dent's disease. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Lowe Syndrome PDBe-KB provides an overview of all the structure information available in the PDB for Human Inositol polyphosphate 5-phosphatase OCRL-1
OCRL
[ "Chemistry", "Biology" ]
214
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
3,245,940
https://en.wikipedia.org/wiki/Indium%20gallium%20nitride
Indium gallium nitride (InGaN, InxGa1−xN) is a semiconductor material made of a mix of gallium nitride (GaN) and indium nitride (InN). It is a ternary group III/group V direct bandgap semiconductor. Its bandgap can be tuned by varying the amount of indium in the alloy. InxGa1−xN has a direct bandgap that spans from the infrared (0.69 eV) of InN to the ultraviolet (3.4 eV) of GaN. The ratio of In/Ga is usually between 0.02/0.98 and 0.3/0.7. Applications LEDs Indium gallium nitride is the light-emitting layer in modern blue and green LEDs and is often grown on a GaN buffer on a transparent substrate such as sapphire or silicon carbide. It has a high heat capacity and its sensitivity to ionizing radiation is low (like other group III nitrides), making it also a potentially suitable material for solar photovoltaic devices, specifically for arrays for satellites. It is theoretically predicted that spinodal decomposition of InGaN should occur for indium compositions between 15% and 85%, leading to In-rich and Ga-rich InGaN regions or clusters. However, only a weak phase segregation has been observed in experimental local structure studies. Other experimental results, using cathodoluminescence and photoluminescence excitation on low In-content InGaN multi-quantum wells, have demonstrated that, given correct material parameters for the InGaN/GaN alloys, theoretical approaches developed for AlGaN/GaN systems also apply to InGaN nanostructures. GaN is a defect-rich material with typical dislocation densities exceeding 10⁸ cm⁻². Light emission from InGaN layers grown on such GaN buffers used in blue and green LEDs is expected to be attenuated because of non-radiative recombination at such defects. Nevertheless, InGaN quantum wells are efficient light emitters in green, blue, white and ultraviolet light-emitting diodes and diode lasers. The indium-rich regions have a lower bandgap than the surrounding material and create regions of reduced potential energy for charge carriers. Electron-hole pairs are trapped there and recombine with emission of light, instead of diffusing to crystal defects where the recombination is non-radiative. Also, self-consistent computer simulations have shown that radiative recombination is concentrated in the indium-rich regions. The emitted wavelength, dependent on the material's band gap, can be controlled by the GaN/InN ratio, from near ultraviolet for 0.02In/0.98Ga, through 390 nm for 0.1In/0.9Ga, violet-blue 420 nm for 0.2In/0.8Ga, and blue 440 nm for 0.3In/0.7Ga, toward red for higher ratios; it also depends on the thickness of the InGaN layers, which are typically in the range of 2–3 nm. However, atomistic simulation results have shown that emission energies have only a minor dependence on small variations of device dimensions. Studies based on device simulation have shown that it could be possible to increase InGaN/GaN LED efficiency using band gap engineering, especially for green LEDs. Photovoltaics The ability to perform bandgap engineering with InGaN over a range that provides a good spectral match to sunlight makes InGaN suitable for solar photovoltaic cells. It is possible to grow multiple layers with different bandgaps, as the material is relatively insensitive to defects introduced by a lattice mismatch between the layers. A two-layer multijunction cell with bandgaps of 1.1 eV and 1.7 eV can attain a theoretical 50% maximum efficiency, and by depositing multiple layers tuned to a wide range of bandgaps an efficiency up to 70% is theoretically expected. 
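As a rough illustration of the composition-to-wavelength relationship described above, the sketch below interpolates the band gap between the end-point values quoted in the article (GaN 3.4 eV, InN 0.69 eV) using a quadratic bowing term. The bowing parameter value (1.4 eV) is a commonly quoted literature figure assumed here for illustration, not a number from this article, and a simple bulk interpolation will not exactly reproduce the quoted LED wavelengths, which also depend on quantum-well thickness and strain.

```python
# Illustrative band-gap interpolation for InxGa1-xN, using the end-point gaps
# quoted above (GaN: 3.4 eV, InN: 0.69 eV) and an assumed bowing parameter.
# This is a sketch of the usual alloy interpolation, not a fitted device model.

def ingan_bandgap_ev(x: float, eg_gan: float = 3.4, eg_inn: float = 0.69,
                     bowing_ev: float = 1.4) -> float:
    """Band gap of InxGa1-xN in eV for an indium fraction 0 <= x <= 1."""
    return (1 - x) * eg_gan + x * eg_inn - bowing_ev * x * (1 - x)

def emission_wavelength_nm(eg_ev: float) -> float:
    """Approximate emission wavelength from the band gap (lambda ~ 1240 / Eg)."""
    return 1239.84 / eg_ev

if __name__ == "__main__":
    for x in (0.02, 0.10, 0.20, 0.30):
        eg = ingan_bandgap_ev(x)
        print(f"x = {x:.2f}: Eg ~ {eg:.2f} eV, "
              f"wavelength ~ {emission_wavelength_nm(eg):.0f} nm")
```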
Significant photoresponse was obtained from experimental InGaN single-junction devices. In addition to controlling the optical properties through band gap engineering, photovoltaic device performance can be improved by engineering the microstructure of the material to increase the optical path length and provide light trapping. Growing nanocolumns on the device can further result in resonant interaction with light, and InGaN nanocolumns have been successfully deposited using plasma-enhanced evaporation. Nanorod growth may also be advantageous in the reduction of threading dislocations, which may act as charge traps that reduce solar cell efficiency. Metal-modulated epitaxy allows controlled atomic layer-by-layer growth of thin films with almost ideal characteristics, enabled by strain relaxation at the first atomic layer. The crystal's lattice structures match up, resembling a perfect crystal, with corresponding luminescence. The crystal had indium content ranging from x ~ 0.22 to 0.67. Significant improvement in the crystalline quality and optical properties began at x ~ 0.6. Films were grown at ~400 °C to facilitate indium incorporation, and with precursor modulation to enhance surface morphology and metal adlayer diffusion. These findings should contribute to the development of growth techniques for nitride semiconductors under high lattice misfit conditions. Quantum heterostructures Quantum heterostructures are often built from GaN with InGaN active layers. InGaN can be combined with other materials, e.g. GaN, AlGaN, on SiC, sapphire and even silicon. Nanorods InGaN nanorod LEDs are three-dimensional structures with a larger emitting surface, better efficiency and greater light emission compared to planar LEDs. Safety and toxicity The toxicology of InGaN has not been fully investigated. The dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of indium gallium nitride sources (such as trimethylindium, trimethylgallium and ammonia) and industrial hygiene monitoring studies of standard MOVPE sources have been reported recently in a review. See also Indium gallium phosphide Indium gallium arsenide References Indium compounds Gallium compounds Nitrides III-V semiconductors III-V compounds Light-emitting diode materials
Indium gallium nitride
[ "Chemistry" ]
1,294
[ "Inorganic compounds", "Semiconductor materials", "III-V semiconductors", "Light-emitting diode materials", "III-V compounds" ]
3,246,329
https://en.wikipedia.org/wiki/Hearing%20range
Hearing range describes the frequency range that can be heard by humans or other animals, though it can also refer to the range of levels. The human range is commonly given as 20 to 20,000 Hz, although there is considerable variation between individuals, especially at high frequencies, and a gradual loss of sensitivity to higher frequencies with age is considered normal. Sensitivity also varies with frequency, as shown by equal-loudness contours. Routine investigation for hearing loss usually involves an audiogram which shows threshold levels relative to a normal. Several animal species can hear frequencies well beyond the human hearing range. Some dolphins and bats, for example, can hear frequencies over 100 kHz. Elephants can hear sounds at 16 Hz–12 kHz, while some whales can hear infrasonic sounds as low as 7 Hz. Physiology The 'hairs' in hair cells in the inner ear, stereocilia, range in height from 1 μm, for auditory detection of very high frequencies, to 50 μm or more in some vestibular systems. Measurement A basic measure of hearing is afforded by an audiogram, a graph of the absolute threshold of hearing (minimum discernible sound level) at various frequencies throughout an organism's nominal hearing range. Behavioural hearing tests or physiological tests can be used to find the hearing thresholds of humans and other animals. For humans, the test involves tones being presented at specific frequencies (pitch) and intensities (loudness). When the subject hears the sound, they indicate this by raising a hand or pressing a button. The lowest intensity they can hear is recorded. The test varies for children; their response to the sound can be indicated by a turn of the head or by using a toy. The child learns what to do upon hearing the sound, such as placing a toy man in a boat. A similar technique can be used when testing animals, where food is used as a reward for responding to the sound. The information on different mammals' hearing was obtained primarily by behavioural hearing tests. Physiological tests do not need the patient to respond consciously. Humans In humans, sound waves funnel into the ear via the external ear canal and reach the eardrum (tympanic membrane). The compression and rarefaction of these waves set this thin membrane in motion, causing sympathetic vibration through the middle ear bones (the ossicles: malleus, incus, and stapes), the basilar fluid in the cochlea, and the hairs within it, called stereocilia. These hairs line the cochlea from base to apex, and the part stimulated and the intensity of stimulation gives an indication of the nature of the sound. Information gathered from the hair cells is sent via the auditory nerve for processing in the brain. The commonly stated range of human hearing is 20 to 20,000 Hz. Under ideal laboratory conditions, humans can hear sound as low as 12 Hz and as high as 28 kHz, though the threshold increases sharply at 15 kHz in adults, corresponding to the last auditory channel of the cochlea. The human auditory system is most sensitive to frequencies between 2,000 and 5,000 Hz. Individual hearing range varies according to the general condition of a human's ears and nervous system. The range shrinks during life, usually beginning at around the age of eight with the upper frequency limit being reduced. Women lose their hearing somewhat less often than men. This is due to a lot of social and external factors. 
For example, men spend more time in noisy places, and this is associated not only with work but also with hobbies and other activities. Women have a sharper hearing loss after menopause. In women, hearing loss is worse at low and, to some extent, middle frequencies, while men are more likely to suffer from hearing loss at high frequencies. Audiograms of human hearing are produced using an audiometer, which presents different frequencies to the subject, usually over calibrated headphones, at specified levels. The levels are weighted with frequency relative to a standard graph known as the minimum audibility curve, which is intended to represent "normal" hearing. The threshold of hearing is set at around 0 phon on the equal-loudness contours (i.e. 20 micropascals, approximately the quietest sound a young healthy human can detect), but is standardised in ANSI standards at 1 kHz. Standards using different reference levels give rise to differences in audiograms. The ASA-1951 standard, for example, used a level of 16.5 dB SPL (sound pressure level) at 1 kHz, whereas the later ANSI-1969/ISO-1963 standard uses a different reference level, with a 10 dB correction applied for older people. Other primates Several primates, especially small ones, can hear frequencies far into the ultrasonic range. In behavioural tests, the hearing range for the Senegal bushbaby is 92 Hz–65 kHz, and 67 Hz–58 kHz for the ring-tailed lemur. Of 19 primates tested, the Japanese macaque had the widest range, 28 Hz–34.5 kHz, compared with 31 Hz–17.6 kHz for humans. Cats Cats have excellent hearing and can detect an extremely broad range of frequencies. They can hear higher-pitched sounds than humans or most dogs, detecting frequencies from 55 Hz up to 79 kHz. Cats do not use this ability to hear ultrasound for communication, but it is probably important in hunting, since many species of rodents make ultrasonic calls. Cat hearing is also extremely sensitive and is among the best of any mammal, being most acute in the range of 500 Hz to 32 kHz. This sensitivity is further enhanced by the cat's large movable outer ears (their pinnae), which both amplify sounds and help a cat sense the direction from which a noise is coming. Dogs The hearing ability of a dog is dependent on breed and age, though the range of hearing is usually around 67 Hz to 45 kHz. As with humans, some dog breeds' hearing ranges narrow with age, such as the German shepherd and miniature poodle. When dogs hear a sound, they will move their ears towards it in order to maximize reception. In order to achieve this, the ears of a dog are controlled by at least 18 muscles, which allow the ears to tilt and rotate. The ear's shape also allows the sound to be heard more accurately. Many breeds have upright and curved ears, which direct and amplify sounds. As dogs hear higher-frequency sounds than humans, they have a different acoustic perception of the world. Sounds that seem loud to humans often also contain high-frequency tones that can scare away dogs. Whistles which emit ultrasonic sound, called dog whistles, are used in dog training, as a dog will respond much better to such sounds. In the wild, dogs use their hearing capabilities to hunt and locate food. Domestic breeds are often used to guard property due to their increased hearing ability. So-called "Nelson" dog whistles generate sounds at frequencies higher than those audible to humans but well within the range of a dog's hearing. Bats Bats have evolved very sensitive hearing to cope with their nocturnal activity. 
Their hearing range varies by species; the lower limit can be as low as 1 kHz in some species, while in others the upper limit reaches up to 200 kHz. Bats that can detect 200 kHz cannot hear very well below 10 kHz. In any case, the most sensitive range of bat hearing is narrower: about 15 kHz to 90 kHz. Bats navigate around objects and locate their prey using echolocation. A bat will produce a very loud, short sound and assess the echo when it bounces back. Bats hunt flying insects; these insects return a faint echo of the bat's call. The type of insect, its size, and its distance can be determined from the quality of the echo and the time it takes for the echo to rebound. There are two types of call: constant frequency (CF), and frequency modulated (FM) calls that descend in pitch. Each type reveals different information; CF is used to detect an object, and FM is used to assess its distance. The pulses of sound produced by the bat last only a few thousandths of a second; silences between the calls give time to listen for the information coming back in the form of an echo. Evidence suggests that bats use the change in pitch of sound produced via the Doppler effect to assess their flight speed in relation to objects around them. The information regarding size, shape and texture is built up to form a picture of their surroundings and the location of their prey. Using these factors a bat can successfully track changes in movement and therefore hunt down its prey. Mice Mice have large ears in comparison to their bodies. They hear higher frequencies than humans; their frequency range is 1 kHz to 70 kHz. They do not hear the lower frequencies that humans can; they communicate using high-frequency noises, some of which are inaudible to humans. The distress call of a young mouse can be produced at 40 kHz. Mice use their ability to produce sounds outside predators' frequency ranges to alert other mice of danger without exposing themselves, though notably, cats' hearing range encompasses the mouse's entire vocal range. The squeaks that humans can hear are lower in frequency and are used by the mouse to make longer-distance calls, as low-frequency sounds can travel farther than high-frequency sounds. Birds Hearing is birds' second most important sense and their ears are funnel-shaped to focus sound. The ears are located slightly behind and below the eyes, and they are covered with soft feathers – the auriculars – for protection. The shape of a bird's head can also affect its hearing, as in owls, whose facial discs help direct sound toward their ears. The hearing range of birds is most sensitive between 1 kHz and 4 kHz, but their full range is roughly similar to human hearing, with higher or lower limits depending on the bird species. No kind of bird has been observed to react to ultrasonic sounds, but certain kinds of birds can hear infrasonic sounds. "Birds are especially sensitive to pitch, tone and rhythm changes and use those variations to recognize other individual birds, even in a noisy flock. Birds also use different sounds, songs and calls in different situations, and recognizing the different noises is essential to determine if a call is warning of a predator, advertising a territorial claim or offering to share food." "Some birds, most notably oilbirds, also use echolocation, just as bats do. These birds live in caves and use their rapid chirps and clicks to navigate through dark caves where even sensitive vision may not be useful enough." Pigeons can hear infrasound. 
With the average pigeon being able to hear sounds as low as 0.5 Hz, they can detect distant storms, earthquakes and even volcanoes. This also helps them to navigate. Insects Greater wax moths (Galleria mellonella) have the highest recorded sound frequency range that has been recorded so far. They can hear frequencies up to 300 kHz. This is likely to help them evade bats. Fish Fish have a narrow hearing range compared to most mammals. Goldfish and catfish do possess a Weberian apparatus and have a wider hearing range than the tuna. Marine mammals As aquatic environments have very different physical properties than land environments, there are differences in how marine mammals hear compared with land mammals. The differences in auditory systems have led to extensive research on aquatic mammals, specifically on dolphins. Researchers customarily divide marine mammals into five hearing groups based on their range of best underwater hearing. (Ketten, 1998): Low-frequency baleen whales like blue whales (7 Hz to 35 kHz); Mid-frequency toothed whales like most dolphins and sperm whales (150 Hz to 160 kHz) ; High-frequency toothed whales like some dolphins and porpoises (275 Hz to 160 kHz); seals (50 Hz to 86 kHz); fur seals and sea lions (60 Hz to 39 kHz). The auditory system of a land mammal typically works via the transfer of sound waves through the ear canals. Ear canals in seals, sea lions, and walruses are similar to those of land mammals and may function the same way. In whales and dolphins, it is not entirely clear how sound is propagated to the ear, but some studies strongly suggest that sound is channelled to the ear by tissues in the area of the lower jaw. One group of whales, the Odontocetes (toothed whales), use echolocation to determine the position of objects such as prey. The toothed whales are also unusual in that the ears are separated from the skull and placed well apart, which assists them with localizing sounds, an important element for echolocation. Studies have found there to be two different types of cochlea in the dolphin population. Type I has been found in the Amazon river dolphin and harbour porpoises. These types of dolphin use extremely high frequency signals for echolocation. Harbour porpoises emit sounds at two bands, one at 2 kHz and one above 110 kHz. The cochlea in these dolphins is specialised to accommodate extreme high frequency sounds and is extremely narrow at the base. Type II cochlea are found primarily in offshore and open water species of whales, such as the bottlenose dolphin. The sounds produced by bottlenose dolphins are lower in frequency and range typically between 75 and 150,000 Hz. The higher frequencies in this range are also used for echolocation and the lower frequencies are commonly associated with social interaction as the signals travel much farther distances. Marine mammals use vocalisations in many different ways. Dolphins communicate via clicks and whistles, and whales use low-frequency moans or pulse signals. Each signal varies in terms of frequency and different signals are used to communicate different aspects. In dolphins, echolocation is used in order to detect and characterize objects and whistles are used in sociable herds as identification and communication devices. See also Audiology Audiometry Auditory system Diplacusis The Mosquito Safe listening Seismic communication Minimum audibility curve Musical acoustics Notes References Further reading Otology Zoology Acoustics Audiology
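The five marine-mammal hearing groups listed above (after Ketten, 1998) can be written as a simple lookup table. The sketch below is only an illustrative encoding of those quoted ranges; the dictionary keys and the groups_hearing() helper are naming conveniences introduced here, not standard terminology or an existing API.

```python
# The five marine-mammal hearing groups quoted above, as a lookup table of
# (lower limit, upper limit) in hertz, with a helper to query a frequency.

HEARING_GROUPS_HZ = {
    "low-frequency baleen whales":   (7,       35_000),
    "mid-frequency toothed whales":  (150,     160_000),
    "high-frequency toothed whales": (275,     160_000),
    "seals":                         (50,      86_000),
    "fur seals and sea lions":       (60,      39_000),
}

def groups_hearing(frequency_hz: float) -> list[str]:
    """Groups whose quoted hearing range includes the given frequency."""
    return [name for name, (lo, hi) in HEARING_GROUPS_HZ.items()
            if lo <= frequency_hz <= hi]

if __name__ == "__main__":
    print(groups_hearing(110_000))  # only the two toothed-whale groups
    print(groups_hearing(20))       # ['low-frequency baleen whales']
```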
Hearing range
[ "Physics", "Biology" ]
2,880
[ "Zoology", "Classical mechanics", "Acoustics" ]
3,246,750
https://en.wikipedia.org/wiki/Masson%27s%20trichrome%20stain
Masson's trichrome is a three-colour staining procedure used in histology. The recipes that emerged from Claude L. Pierre Masson's (1880–1959) original formulation have different specific applications, but all are suited for distinguishing cells from surrounding connective tissue. Most recipes produce red keratin and muscle fibers, blue or green collagen and bone, light red or pink cytoplasm, and dark brown to black cell nuclei. The trichrome is applied by immersion of the fixed sample into Weigert's iron hematoxylin, and then three different solutions, labeled A, B, and C: Weigert's hematoxylin is a sequence of three solutions: ferric chloride in diluted hydrochloric acid, hematoxylin in 95% ethanol, and potassium ferricyanide solution alkalized by sodium borate. It is used to stain the nuclei. Solution A, also called plasma stain, contains acid fuchsin, Xylidine Ponceau, glacial acetic acid, and distilled water. Other red acid dyes can be used, e.g. the Biebrich scarlet in Lillie's trichrome. Solution B contains phosphomolybdic/phosphotungstic acid in distilled water. Solution C, also called fibre stain, contains Light Green SF yellowish, or alternatively Fast Green FCF. It is used to stain collagen. If blue is preferred to green, methyl blue or water blue can be substituted. Standard applications: Masson's trichrome staining is widely used to study muscular pathologies (muscular dystrophy), cardiac pathologies (infarct), hepatic pathologies (cirrhosis) or kidney pathologies (glomerular fibrosis). It can also be used to detect and analyze tumors in hepatic and kidney biopsies. Variants A common variant is Lillie's trichrome. It is often erroneously called Masson's trichrome. It differs in the dyes used, their concentrations, and the immersion times. Another common variant is the Masson trichrome & Verhoeff stain, which combines the Masson trichrome stain and Verhoeff's stain. This combination is useful for the examination of blood vessels; the Verhoeff stain highlights elastin (black) and allows one to easily differentiate small arteries (which typically have at least two elastic laminae) from veins (which have one elastic lamina). See also Collagen Hybridizing Peptide, a peptide that can bind to denatured collagen in tissues References External links Masson's Trichrome Protocol Trichrome Stain Stainsfile: Masson's Trichrome Staining
Masson's trichrome stain
[ "Chemistry", "Biology" ]
578
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
3,247,517
https://en.wikipedia.org/wiki/Deubiquitinating%20enzyme
Deubiquitinating enzymes (DUBs), also known as deubiquitinating peptidases, deubiquitinating isopeptidases, deubiquitinases, ubiquitin proteases, ubiquitin hydrolases, or ubiquitin isopeptidases, are a large group of proteases that cleave ubiquitin from proteins. Ubiquitin is attached to proteins in order to regulate the degradation of proteins via the proteasome and lysosome; coordinate the cellular localisation of proteins; activate and inactivate proteins; and modulate protein-protein interactions. DUBs can reverse these effects by cleaving the peptide or isopeptide bond between ubiquitin and its substrate protein. In humans there are nearly 100 DUB genes, which can be classified into two main classes: cysteine proteases and metalloproteases. The cysteine proteases comprise ubiquitin-specific proteases (USPs), ubiquitin C-terminal hydrolases (UCHs), Machado-Josephin domain proteases (MJDs) and ovarian tumour proteases (OTU). The metalloprotease group contains only the Jab1/Mov34/Mpr1 Pad1 N-terminal+ (MPN+) (JAMM) domain proteases. Classes In humans there are 102 putative DUB genes, which can be classified into two main classes: cysteine proteases and metalloproteases, consisting of 58 ubiquitin-specific proteases (USPs), 4 ubiquitin C-terminal hydrolases (UCHs), 5 Machado-Josephin domain proteases (MJDs), 14 ovarian tumour proteases (OTU), and 14 Jab1/Mov34/Mpr1 Pad1 N-terminal+ (MPN+) (JAMM) domain-containing genes. 11 of these proteins are predicted to be non-functional, leaving 79 functional enzymes. In yeast, the USPs are known as ubiquitin-specific-processing proteases (UBPs). Cysteine proteases There are six main superfamilies of cysteine protease DUBs: the ubiquitin-specific protease (USP/UBP) superfamily; (USP1, USP2, USP3, USP4, USP5, USP6, USP7, USP8, USP9X, USP9Y, USP10, USP11, USP12, USP13, USP14, USP15, USP16, USP17, USP17L2, USP17L3, USP17L4, USP17L5, USP17L7, USP17L8, USP18, USP19, USP20, USP21, USP22, USP23, USP24, USP25, USP26, USP27X, USP28, USP29, USP30, USP31, USP32, USP33, USP34, USP35, USP36, USP37, USP38, USP39, USP40, USP41, USP42, USP43, USP44, USP45, USP46) the ovarian tumour (OTU) superfamily (OTUB1, OTUB2); and the Machado-Josephin domain (MJD) superfamily. (ATXN3, ATXN3L) the ubiquitin C-terminal hydrolase (UCH) superfamily; (BAP1, UCHL1, UCHL3, UCHL5) the MINDY family of K48-specific deubiquitinases; (MINDY1, MINDY2, MINDY3, MINDY4) the recently discovered ZUFSP family, at present solely represented by ZUP1 There is also a little known putative group of DUBs called the permutated papain fold peptidases of dsDNA viruses and eukaryote (PPPDEs) superfamily, which, if shown to be bona fide DUBs, would be the seventh in the cysteine protease class. Metalloproteases The Jab1/Mov34/Mpr1 Pad1 N-terminal+ (MPN+) (JAMM) domain superfamily proteins bind zinc and hence are metalloproteases. Role of deubiquitinating enzymes DUBs play several roles in the ubiquitin pathway. One of the best characterised functions of DUBs is the removal of monoubiqutin and polyubiquitin chains from proteins. These modifications are a post translational modification (addition to a protein after it has been made) where single ubiquitin proteins or chains of ubiquitin are added to lysines of a substrate protein. These ubiquitin modifications are added to proteins by the ubiquitination machinery; ubiquitin-activating enzymes (E1s), ubiquitin-conjugating enzymes (E2s) and ubiquitin ligases (E3s). The result is ubiquitin bound to lysine residues via an isopeptide bond. 
Proteins are affected by these modifications in a number of ways: they regulate the degradation of proteins via the proteasome and lysosome; coordinate the cellular localisation of proteins; activate and inactivate proteins; and modulate protein-protein interactions. DUBs play the antagonistic role in this axis by removing these modifications, therefore reversing the fate of the proteins. In addition, a less understood role of DUBs is the cleavage of ubiquitin-like proteins such as SUMO and NEDD8. Some DUBs may have the ability to cleave isopeptide bonds between these proteins and substrate proteins. They activate ubiquitin by the proteolysis (breaking down) of the inactive expressed forms of ubiquitin. Ubiquitin is encoded in mammals by 4 different genes: UBA52, RPS27A, UBB and UBC. A similar set of genes is found in other eukaryotes such as yeast. The UBA52 and RPS27A genes produce ubiquitin that is fused to ribosomal proteins and the UBB and UBC genes produce polyubiquitin (a chain of ubiquitin joined by their C- and N-termini). DUBs cleave the ubiquitin from these proteins, producing active single units of ubiquitin. DUBs also cleave single ubiquitin proteins that may have had their C-terminal tails accidentally bound to small cellular nucleophiles. These ubiquitin-amides and ubiquitin-thioesters may be formed during standard ubiquitination reactions by the E1-E2-E3 cascade. Glutathione and polyamines are two nucleophiles that might attack the thiolester bond between ubiquitin and these enzymes. Ubiquitin C-terminal hydrolase is an example of the DUB that hydrolyses these bonds with broad specificity. Free polyubiquitin chains are cleaved by DUBs to produce monoubiquitin. The chains may be produced by the E1-E2-E3 machinery in the cell free from any substrate protein. Another source of free polyubiquitin is the product of ubiquitin-substrate cleavage. If DUBs cleave the base of the polyubiquitin chain that is attached to a protein, the whole chain will become free and needs to be recycled by DUBs. Domains DUBs often contain a catalytic domain surrounded by one or more accessory domains, some of which contribute to target recognition. These additional domains include domain present in ubiquitin-specific proteases (DUSP) domain; ubiquitin-like (UBL) domain; meprin and TRAF homology (MATH) domain; zinc-finger ubiquitin-specific protease (ZnF-UBP) domain; zinc-finger myeloid, nervy and DEAF1 (ZnF-MYND) domain; ubiquitin-associated (UBA) domain; CHORD-SGT1 (CS) domain; microtubule-interacting and trafficking (MIT) domain; rhodenase-like domain; TBC/RABGAP domain; and B-box domain. Catalytic domain The catalytic domain of DUBs is what classifies them into particular groups; USPs, OTUs, MJDs, UCHs and MPN+/JAMMs. The first 4 groups are cysteine proteases, whereas the latter is a zinc metalloprotease. The cysteine protease DUBs are papain-like and thus have a similar mechanism of action. They use either catalytic dyads or triads (either two or three amino acids) to catalyse the hydrolysis of the amide bonds between ubiquitin and the substrate. The active site residues that contribute to the catalytic activity of the cysteine protease DUBs are cysteine (dyad/triad), histidine (dyad/triad) and aspartate or asparagine (triad only). The histidine is polarised by the aspartate or asparagine in catalytic triads or by other ways in dyads. 
This polarised residue lowers the pKa of the cysteine, allowing it to perform a nucleophilic attack on the isopeptide bond between the ubiquitin C-terminus and the substrate lysine. Metalloproteases coordinate zinc ions with histidine, aspartate and serine residues, which activate water molecules and allows them to attack the isopeptide bond. UBL Ubiquitin-like (UBL) domains have a similar structure (fold) to ubiquitin, except they lack the terminal glycine residues. 18 USPs are proposed to have UBL domains. Only 2 other DUBs have UBLs outside the USP group: OTU1 and VCPIP1. USP4, USP7, USP11, USP15, USP32, USP40 and USP47 have multiple UBL domains. Sometimes the UBL domains are in tandem, such as in USP7 where 5 tandem C-terminal UBL domains are present. USP4, USP6, USP11, USP15, USP19, USP31, USP32 and USP43 have UBL domains inserted into the catalytic domain. The functions of UBL domains are different between USPs, but commonly they regulate USP catalytic activity. They can coordinate localisation at the proteasome (USP14); negatively regulate USPs by competing for the catalytic site of the USP (USP4), and induce conformational changes to increase catalytic activity (USP7). Like other UBL domains, the structure of USP UBL domains show a β-grasp fold. DUSP Single or multiple tandem DUSP domains of approximately 120 residues are found in six USPs. The function of the DUSP domain is currently unknown but it may play a role in protein-protein interaction, in particular to DUBs substrate recognition. This is predicted because of the hydrophobic cleft present in the DUSP domain of USP15 and that some protein interactions with DUSP containing USPs do not occur without these domains. The DUSP domain displays a novel tripod-like fold comprising three helices and an anti-parallel beta-sheet made of three strands. This fold resembles the legs (helices) and seat (beta-sheet) of the tripod. Within most DUSP domains in USPs there is a conserved sequence of amino acids known as the PGPI motif. This is a sequence of four amino acids; proline, glycine, proline and isoleucine, which packs against the three-helix bundle and is highly ordered. Role in disease The full extent of the role of DUBs in diseases remains to be elucidated. Their involvement in disease is predicted due to known roles in physiological processes that are involved in disease states; including cancer and neurological disorders. The enzyme USP28 is over-expressed in different types of cancer such as colon or lung. In addition, USP28 deubiquitinates and stabilizes important oncogenes such as c-Myc, Notch1, c-jun or ΔNp63. In squamous tumors, USP28 regulates the resistance to chemotherapy regulating DNA repair via ΔNp63-Fanconia anemia pathway axis. The deubiquitinating enzymes UCH-L3 and YUH1 are able to hydrolyse mutant ubiquitin UBB+1 despite the fact that the glycine at position 76 is mutated. UCH-L1 levels are high in various types of malignancies (cancer). Role in the cell cycle DUBs play an active role in modulating the cell cycle. Ubiquitin-specific-processing protease (USP) is a family of deubiquitinating enzymes that play a crucial role in cell cycle regulation. Two such enzymes include USP17 and USP44. USP17 regulates pathways responsible for progressing cells through the cell cycle. Its targets include regulators of Ras, CDK2, and Cyclin A. USP44 plays an important role in anaphase initiation. New research into the mitotic checkpoint has revealed a novel role for USP44 in regulating cell cycle progression. 
USP regulation of Ras The ERK Pathway allows for the transduction of external mitogenic signals into intracellular signals promoting cellular proliferation. One of the key regulators of this pathway is Ras, a GTPase that, upon activation, binds GTP to "turn on" the subsequent signaling cascade. Ras converting enzyme 1 (RCE1) post-translationally cleaves the 3 residues on the C-terminus of Ras, allowing Ras to properly localize to the plasma membrane. USP17 acts to deubiquitinate K63-ubiquitin domains on RCE1. Such stabilization of RCE1 allows for proper localization of Ras, thus promoting proliferation upon activation of early receptors in the ERK Pathway. Ras hyperactivity can result in cell cycle dysregulation. Thus, USP17 acts as a further point of control in Ras regulation. USP regulation of G1-S transition Cyclin-dependent kinases (CDKs) are a family of enzymes that phosphorylate serine and threonine residues to drive the cell through the cell cycle. Activation of CDK2 is critical for the G1-S transition. For CDK2 to be activated, cyclin A must bind to the cyclin-dependent kinase complex (CDKC). Cell division cycle 25A (CDC25A) is a phosphatase that removes an inhibitory phosphate group from CDK2. While ubiquitination would mark CDC25A for degradation, thus blocking progression to S phase, USP17 deubiquitinates CDC25A. An increase in CDC25A stability promotes CDKC activity, thus driving the cell through the G1-S transition. USP17 also regulates cell cycle progression by acting on SETD8 to downregulate transcription of cyclin-dependent kinase inhibitor 1 (CDKN1A), also known as p21. CDKN1A binds to and inhibits CDK2 using its N-terminal binding domain, thus blocking progression through the G1-S transition. SETD8, a methyltransferase, uses S-adenosyl methionine to methylate the Lys20 residue of histone H4, resulting in the condensation of chromosomes. This compaction of the DNA downregulates CDKN1A transcription. USP17 deubiquitinates SETD8, thus reducing its propensity for degradation and increasing its intracellular stability. The resulting downregulation in CDKN1A transcription promotes CDK2 activity, allowing the cell to progress through the G1-S transition. See schematic of the role of DUBs in cell cycle regulation. USP44 in anaphase initiation The spindle checkpoint (also referred to as the mitotic checkpoint) ensures proper separation of chromosomes. Broadly, the mitotic checkpoint promotes fidelity in chromosomal segregation, increasing the likelihood that each daughter cell receives only one duplicated chromosome. Such a mechanism is crucial, as errors in chromosomal separation have been implicated in cancer, birth defects, and antibiotic resistance in pathogens. One of the core regulator proteins is the anaphase-promoting complex (APC/C). APC/C ubiquitinates securin. The resulting destruction of securin releases separase, which hydrolyzes cohesin – the protein that binds sister chromatids together. New research from Stegmeier and colleagues published in the journal Nature demonstrates a crucial role for USP44 in regulating the spindle checkpoint. Using an shRNA screen, USP44 was identified as a stabilizer of the inhibited state of APC/C. The binding of CDC20 to APC/C is required for the ubiquitination of securin. A protein called hMAD2 can form an inactive trimer with APC and CDC20, forming the hMAD2-CDC-APC complex. Upon the ubiquitination of CDC20 by UbcH10, hMAD2 dissociates, and APC/C becomes active.
It is important to note that ubiquitination of CDC20 does not serve to mark it for degradation, but rather promotes dissociation of hMAD2 from the hMAD2-CDC-APC complex. USP44, a ubiquitin-specific-processing protease, can stabilize the inactive hMAD2-CDC-APC complex by counteracting UbcH10 ubiquitination. This blocks hMAD2 dissociation and allows for proper regulation of APC/C, keeping it inactive until proper attachment of the mitotic spindle. Upon proper attachment, switch-like behavior allows for the activation of APC/C. This results in the cleavage of cohesin, allowing for the separation of sister chromatids. Role in p53-mediated DNA damage repair DNA damage can prove catastrophic for an organism. Mechanisms for DNA mutation include oxidative stress, DNA replication errors, exogenous carcinogens, radiation, and spontaneous base mutation. Upon DNA damage, cell cycle progression is halted to prevent propagation of the mutation. The TP53 gene (also known as p53) is crucial in ensuring the conservation of the genome. Deubiquitinating enzymes play an integral role in maintaining p53's function. In healthy cells, p53 activates the E3 ubiquitin ligase MDM2, which in turn ubiquitinates p53. This creates a negative feedback loop, whereby the degradation of p53 allows cells to flow through the cell cycle. Upon DNA damage, ubiquitin-specific-processing protease 7 (USP7) stabilizes p53 by cleaving ubiquitin. For USP7 to deubiquitinate p53, it must localize to the nucleus. However, no nuclear localization sequence (NLS) has been found. Despite no known NLS, one study showed that, upon deletion of USP7's N-terminus, no nuclear localization occurred. It is possible that other proteins facilitate nuclear entry of USP7. Once stabilized, p53 can exert its tumor suppression function. Downstream pathways of p53 act to either halt cell cycle progression in the G1 or G2 phases of the cell cycle or promote cell death, depending on the severity of the DNA damage. See schematic of the role of USP7 in the p53-dependent pathway. References Protein domains Enzymes
Deubiquitinating enzyme
[ "Biology" ]
4,337
[ "Protein domains", "Protein classification" ]
3,249,396
https://en.wikipedia.org/wiki/Azorubine
Azorubine is an azo dye consisting of two naphthalene subunits. It is a red solid. It is mainly used in foods that are heat-treated after fermentation. It has E number E122. Uses In the US, this color was listed in 1939 as Ext. D&C Red No. 10 for use in externally applied drugs and cosmetics. It was delisted in 1963 because no party was interested in supporting the studies needed to establish safety. It was not used in food in the US. In the EU, azorubine is known as E number E122, and is authorized for use in certain foods and beverages, such as cheeses, dried fruit, and some alcoholic beverages, and is permitted for use as an excipient in medications. There are no provisions for azorubine in the Codex Alimentarius. Safety Azorubine has shown no evidence of mutagenic or carcinogenic properties and an acceptable daily intake (ADI) of 0–4 mg/kg was established in 1983 by the WHO. In rare instances, it may cause skin and respiratory allergic reactions even to FDA approved dosages. No evidence supports broad claims that food coloring causes food intolerance and ADHD-like behavior in children. It is possible that certain food coloring may act as a trigger in those who are genetically predisposed, but the evidence is weak. References Food colorings Azo dyes Organic sodium salts Naphthalenesulfonates 1-Naphthols E-number additives Acid dyes
Azorubine
[ "Chemistry" ]
326
[ "Organic sodium salts", "Salts" ]
3,250,305
https://en.wikipedia.org/wiki/Rocuronium%20bromide
Rocuronium bromide (brand names Zemuron, Esmeron) is an aminosteroid non-depolarizing neuromuscular blocker or muscle relaxant used in modern anaesthesia to facilitate tracheal intubation by providing skeletal muscle relaxation, most commonly required for surgery or mechanical ventilation. It is used for standard endotracheal intubation, as well as for rapid sequence induction (RSI). Pharmacology Mechanism of action Rocuronium bromide is a competitive antagonist for the nicotinic acetylcholine receptors at the neuromuscular junction. Of the neuromuscular-blocking drugs it is considered to be a non-depolarizing neuromuscular junction blocker, because it acts by dampening the receptor action causing muscle relaxation, instead of continual depolarisation which is the mechanism of action of the depolarizing neuromuscular junction blockers, like succinylcholine. It was designed to be a weaker antagonist at the neuromuscular junction than pancuronium; hence its monoquaternary structure and its having an allyl group and a pyrrolidine group attached to the D ring quaternary nitrogen atom. Rocuronium has a rapid onset and intermediate duration of action. There is considered to be a risk of allergic reaction to the drug in some patients (particularly those with asthma), but a similar incidence of allergic reactions has been observed by using other members of the same drug class (non-depolarizing neuromuscular blocking drugs). The γ-cyclodextrin derivative sugammadex (trade name Bridion) is an agent to reverse the action of rocuronium by binding to it with high affinity. Sugammadex has been in use since 2009 in many European countries; however, it was turned down for approval twice by the US FDA due to concerns over allergic reactions and bleeding, but finally approved the medication for use during surgical procedures in the United States on December 15, 2015. The acetylcholinesterase inhibitor neostigmine can also be used as a reversal agent of rocuronium but is not as effective as sugammadex. Neostigmine is often still used due to its low cost compared with sugammadex. History It was introduced in 1994. Society and culture Executions On July 27, 2012, the U.S. state of Virginia replaced pancuronium bromide, one of the three drugs used in execution by lethal injection, with rocuronium bromide. On October 3, 2016, the U.S. state of Ohio announced that it would resume executions on January 12, 2017, using a combination of midazolam, rocuronium bromide, and potassium chloride. Prior to this, the last execution in Ohio was in January 2014. On August 24, 2017, the U.S. state of Florida executed Mark James Asay using a combination of etomidate, rocuronium bromide, and potassium acetate. Euthanasia Since 2016, rocuronium bromide has been the standard drug, along with propofol, administered to patients for euthanasia in Canada. Brand names Rocuronium bromide is marketed under the brand name Zemuron in the United States and Esmeron in most other countries. References Muscle relaxants Nicotinic antagonists Quaternary ammonium compounds 4-Morpholinyl compounds Drugs developed by Schering-Plough Drugs developed by Merck & Co. Pyrrolidines Acetate esters Chemical substances for emergency medicine Allyl compounds Neuromuscular blockers Lethal injection components
Rocuronium bromide
[ "Chemistry" ]
755
[ "Chemicals in medicine", "Chemical substances for emergency medicine" ]
3,250,638
https://en.wikipedia.org/wiki/Plasma%20osmolality
Plasma osmolality measures the body's electrolyte–water balance. There are several methods for arriving at this quantity through measurement or calculation. Osmolality and osmolarity are measures that are technically different, but functionally the same for normal use. Whereas osmolality (with an "l") is defined as the number of osmoles (Osm) of solute per kilogram of solvent (osmol/kg or Osm/kg), osmolarity (with an "r") is defined as the number of osmoles of solute per liter (L) of solution (osmol/L or Osm/L). As such, larger numbers indicate a greater concentration of solutes in the plasma. Measured osmolality (MO) Osmolality can be measured on an analytical instrument called an osmometer. It works on the method of depression of freezing point. Osmolality versus osmolarity Osmolarity is affected by changes in water content, as well as temperature and pressure. In contrast, osmolality is independent of temperature and pressure. For a given solution, osmolarity is slightly less than osmolality, because the total solvent weight (the divisor used for osmolality) excludes the weight of any solutes, whereas the total solution volume (used for osmolarity) includes solute content. Otherwise, one litre of plasma would be equivalent to one kilogram of plasma, and plasma osmolarity and plasma osmolality would be equal. However, at low concentrations (below about 500 mM), the mass of the solute is negligible compared to the mass of the solvent, and osmolarity and osmolality are very similar. Technically, the terms can be compared as follows: Therefore, bedside calculations are actually in units of osmolarity, whereas laboratory measurements will provide readings in units of osmolality. In practice, there is almost negligible difference between the absolute values of the different measurements. For this reason, both terms are often used interchangeably, even though they refer to different units of measurement. Ranges Human Normal human reference range of osmolality in plasma is about 275-299 milli-osmoles per kilogram. Nonhuman Plasma osmolarity of some reptiles, especial those from a freshwater aquatic environment, may be lower than that of mammals (e.g. < 260 mOsm/L) during favourable conditions. Consequently, solutions osmotically balanced for mammals (e.g., 0.9% normal saline) are likely to be mildly hypertonic for such animals. Many arid species of reptiles and hibernating uricotelic species allow major elevations of plasma osmolarity (e.g. > 400 mOsm/L) that could be fatal to some mammals. Deep-sea fish have adapted to the extreme hydrostatic pressures of depth through a number of factors, including increasing osmolality, with one of the deepest known fish in the world, the hadal snailfish (Notoliparis kermadecensis) having a recorded muscle osmolality of 991 ± 22 mOsmol/kg, almost four times the osmolality of mammals and three times that of shallow water fish species (typically 350 mOsmol/kg). Clinical relevance As cell membranes in general are freely permeable to water, the osmolality of the extracellular fluid (ECF) is approximately equal to that of the intracellular fluid (ICF). Therefore, plasma osmolality is a guide to intracellular osmolality. This is important, as it shows that changes in ECF osmolality have a great effect on ICF osmolality — changes that can cause problems with normal cell functioning and volume. If the ECF were to become too hypotonic, water would readily fill surrounding cells, increasing their volume and potentially lysing them (cytolysis). 
Many poisons, medications and diseases affect the balance between the ICF and ECF, affecting individual cells and homeostasis as a whole. Osmolality of blood increases with dehydration and decreases with overhydration. In normal people, increased osmolality in the blood will stimulate secretion of antidiuretic hormone (ADH). This will result in increased water reabsorption, more concentrated urine, and less concentrated blood plasma. A low serum osmolality will suppress the release of ADH, resulting in decreased water reabsorption and more concentrated plasma. Syndrome of inappropriate ADH secretion occurs when excessive release of antidiuretic hormone results in inappropriately elevated urine osmolality (>100 mOsmol/L) relative to the blood plasma, leading to hyponatraemia. This ADH secretion may occur in excessive amounts from the posterior pituitary gland, or from ectopic sources such as small-cell carcinoma of the lung. Elevation may be associated with stroke mortality. Calculated osmolarity (CO) In medical lab reports, this quantity often appears as "Osmo, Calc" or "Osmo (Calc)." In SI units, the following equation is used: Calculated osmolarity = 2 [Na+] + [Glucose] + [Urea] (all in mmol/L). As Na+ is the major extracellular cation, the summed osmolarity of the accompanying anions can be assumed to be equal to the natremia, hence 2 × [Na+] ≈ [Na+] + [anions]. To calculate plasma osmolality use the following equation (typical in the US): Calculated osmolarity = 2 [Na+] + [Glucose]/18 + [BUN]/2.8, where [Glucose] and [BUN] are measured in mg/dL. If the patient has ingested ethanol, the ethanol level should be included in the calculated osmolarity: Calculated osmolarity = 2 [Na+] + [Glucose]/18 + [BUN]/2.8 + [Ethanol]/3.7. Based on the molecular weight of ethanol the divisor should be 4.6, but empiric data shows that ethanol does not behave as an ideal osmole. Osmolar gap (OG) The osmolar gap is the difference between the measured osmolality and the calculated osmolarity. The difference in units is attributed to the difference in the way that blood solutes are measured in the laboratory versus the way they are calculated. The laboratory value measures the freezing point depression, properly called osmolality, while the calculated value is given in units of osmolarity. Even though these values are presented in different units, when there is a small amount of solute compared to the total volume of solution, the absolute values of osmolality vs. osmolarity are very close. Often, this results in confusion as to which units are meant. For practical purposes, the units are considered interchangeable. The resulting "osmolar gap" can be thought of as either osmolar or osmolal, since both units have been used in its derivation. Measured osmolality is abbreviated "MO", calculated osmolarity is abbreviated "CO", and the osmolality gap is abbreviated "OG". Clinically, the osmolar gap is used to detect the presence of an osmotically active particle that is not normally found in plasma, usually a toxic alcohol such as ethanol, methanol or isopropyl alcohol. See also Osmotic concentration Urine osmolality Serum Osmolarity vs. Osmolality References Blood tests
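As a quick illustration of the bedside formulas above, the Python sketch below evaluates the US-convention calculated osmolarity and the osmolar gap. It is a minimal example, not clinical guidance; the function names and example values are assumptions, while the divisors (18, 2.8, 3.7) are the ones quoted in the text.

def calculated_osmolarity(na_mmol_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl=0.0):
    # 2[Na+] + [Glucose]/18 + [BUN]/2.8 (+ [Ethanol]/3.7), result in mOsm/L
    return 2 * na_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8 + ethanol_mg_dl / 3.7

def osmolar_gap(measured_mosm_kg, na_mmol_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl=0.0):
    # Measured osmolality minus calculated osmolarity (units treated as interchangeable, as discussed above).
    return measured_mosm_kg - calculated_osmolarity(na_mmol_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl)

print(calculated_osmolarity(140, 90, 14))   # 290.0 mOsm/L for Na 140 mmol/L, glucose 90 mg/dL, BUN 14 mg/dL
print(osmolar_gap(320, 140, 90, 14))        # ~30 – a gap this large suggests an unmeasured osmole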
Plasma osmolality
[ "Chemistry" ]
1,614
[ "Blood tests", "Chemical pathology" ]
3,251,151
https://en.wikipedia.org/wiki/Quilt%20maple
Quilt or quilted maple refers to a type of figure in maple wood. It is seen on the tangential plane (flat-sawn) and looks like a wavy "quilted" pattern, often similar to ripples on water. The figure is a distortion of the grain pattern itself. The highest-quality quilted figure is found in the western bigleaf maple. Prized for its beauty, quilted maple is used frequently in the manufacture of musical instruments, especially guitars. See also Chatoyancy Flame maple References External links Maple Wood
Quilt maple
[ "Physics" ]
108
[ "Materials stubs", "Materials", "Matter" ]
2,362,255
https://en.wikipedia.org/wiki/Eckert%20number
The Eckert number (Ec) is a dimensionless number used in continuum mechanics. It expresses the relationship between a flow's kinetic energy and the boundary layer enthalpy difference, and is used to characterize heat transfer dissipation. It is named after Ernst R. G. Eckert. It is defined as Ec = u² / (cp ΔT), where u is the local flow velocity of the continuum, cp is the constant-pressure local specific heat of the continuum, and ΔT is the difference between wall temperature and local temperature. References Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Continuum mechanics
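A one-line computation makes the definition above concrete; the function name and the example numbers in this Python sketch are illustrative assumptions, not values from the article.

def eckert_number(u, cp, delta_t):
    # Ec = u^2 / (cp * ΔT); with SI inputs (m/s, J/(kg·K), K) the result is dimensionless.
    return u ** 2 / (cp * delta_t)

print(eckert_number(50.0, 1005.0, 20.0))  # ≈ 0.12 for air at 50 m/s with a 20 K wall-to-fluid temperature difference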
Eckert number
[ "Physics", "Chemistry" ]
122
[ "Thermodynamic properties", "Physical quantities", "Dimensionless numbers of thermodynamics", "Continuum mechanics", "Classical mechanics", "Fluid dynamics stubs", "Fluid dynamics" ]
2,362,507
https://en.wikipedia.org/wiki/Uranium%E2%80%93lead%20dating
Uranium–lead dating, abbreviated U–Pb dating, is one of the oldest and most refined of the radiometric dating schemes. It can be used to date rocks that formed and crystallised from about 1 million years to over 4.5 billion years ago with routine precisions in the 0.1–1 percent range. The method is usually applied to zircon. This mineral incorporates uranium and thorium atoms into its crystal structure, but strongly rejects lead when forming. As a result, newly-formed zircon crystals will contain no lead, meaning that any lead found in the mineral is radiogenic. Since the exact rate at which uranium decays into lead is known, the current ratio of lead to uranium in a sample of the mineral can be used to reliably determine its age. The method relies on two separate decay chains, the uranium series from 238U to 206Pb, with a half-life of 4.47 billion years and the actinium series from 235U to 207Pb, with a half-life of 710 million years. Decay routes Uranium decays to lead via a series of alpha and beta decays, in which 238U and its daughter nuclides undergo a total of eight alpha and six beta decays, whereas 235U and its daughters only experience seven alpha and four beta decays. The existence of two 'parallel' uranium–lead decay routes (238U to 206Pb and 235U to 207Pb) leads to multiple feasible dating techniques within the overall U–Pb system. The term U–Pb dating normally implies the coupled use of both decay schemes in the 'concordia diagram' (see below). However, use of a single decay scheme (usually 238U to 206Pb) leads to the U–Pb isochron dating method, analogous to the rubidium–strontium dating method. Finally, ages can also be determined from the U–Pb system by analysis of Pb isotope ratios alone. This is termed the lead–lead dating method. Clair Cameron Patterson, an American geochemist who pioneered studies of uranium–lead radiometric dating methods, used it to obtain one of the earliest estimates of the age of the Earth in 1956 to be 4.550Gy ± 70My; a figure that has remained largely unchallenged since. Mineralogy Although zircon (ZrSiO4) is most commonly used, other minerals such as monazite (see: monazite geochronology), titanite, and baddeleyite can also be used. Where crystals such as zircon with uranium and thorium inclusions cannot be obtained, uranium–lead dating techniques have also been applied to other minerals such as calcite / aragonite and other carbonate minerals. These types of minerals often produce lower-precision ages than igneous and metamorphic minerals traditionally used for age dating, but are more commonly available in the geologic record. Mechanism During the alpha decay steps, the zircon crystal experiences radiation damage, associated with each alpha decay. This damage is most concentrated around the parent isotope (U and Th), expelling the daughter isotope (Pb) from its original position in the zircon lattice. In areas with a high concentration of the parent isotope, damage to the crystal lattice is quite extensive, and will often interconnect to form a network of radiation damaged areas. Fission tracks and micro-cracks within the crystal will further extend this radiation damage network. These fission tracks act as conduits deep within the crystal, providing a method of transport to facilitate the leaching of lead isotopes from the zircon crystal. Computation Under conditions where no lead loss or gain from the outside environment has occurred, the age of the zircon can be calculated by assuming exponential decay of uranium. 
That is N_now = N_orig · e^(−λt), where N_now is the number of uranium atoms measured now, N_orig is the number of uranium atoms originally present – equal to the sum of uranium and lead atoms measured now – λ is the decay constant of uranium, and t is the age of the zircon, which one wants to determine. This gives N_orig / N_now = e^(λt), which can be written as t = (1/λ) · ln(N_orig / N_now). The more commonly used decay chains of uranium and lead give the following equations: 206Pb*/238U = e^(λ238·t) − 1 (equation 1) and 207Pb*/235U = e^(λ235·t) − 1 (equation 2). (The notation Pb*, sometimes used in this context, refers to radiogenic lead. For zircon, the original lead content can be assumed to be zero, and the notation can be ignored.) These are said to yield concordant ages (t from each of equations 1 and 2). It is these concordant ages, plotted over a series of time intervals, that result in the concordia line. Loss (leakage) of lead from the sample will result in a discrepancy in the ages determined by each decay scheme. This effect is referred to as discordance and is demonstrated in Figure 1. If a series of zircon samples has lost different amounts of lead, the samples generate a discordia line. The upper intercept of the concordia and the discordia line will reflect the original age of formation, while the lower intercept will reflect the age of the event that led to open-system behavior and therefore the lead loss; although there has been some disagreement regarding the meaning of the lower-intercept ages. Undamaged zircon retains the lead generated by radioactive decay of uranium and thorium up to very high temperatures (about 900 °C), though accumulated radiation damage within zones of very high uranium can lower this temperature substantially. Zircon is very chemically inert and resistant to mechanical weathering – a mixed blessing for geochronologists, as zones or even whole crystals can survive melting of their parent rock with their original uranium–lead age intact. Thus, zircon crystals with prolonged and complicated histories can contain zones of dramatically different ages (usually with the oldest zone forming the core, and the youngest zone forming the rim of the crystal), and so are said to demonstrate "inherited characteristics". Unraveling such complexities (which can also exist within other minerals, depending on their maximum lead-retention temperature) generally requires in situ micro-beam analysis using, for example, an ion microprobe (SIMS) or laser ICP-MS. References Radiometric dating
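Under the closed-system assumption above, each decay chain gives an age directly from its radiogenic daughter-to-parent ratio, t = (1/λ)·ln(1 + Pb*/U). The Python sketch below is a minimal illustration; the decay constants are the commonly used Jaffey et al. values, and the function name and example ratio are assumptions rather than data from the article.

import math

LAMBDA_238 = 1.55125e-10   # decay constant of 238U, per year
LAMBDA_235 = 9.8485e-10    # decay constant of 235U, per year

def u_pb_age(radiogenic_ratio, decay_constant):
    """Age in years from a Pb*/U atomic ratio, assuming no initial lead and no lead loss."""
    return math.log(1.0 + radiogenic_ratio) / decay_constant

t = u_pb_age(0.18, LAMBDA_238)           # measured 206Pb*/238U = 0.18
print(t / 1e9)                           # ≈ 1.07 billion years
print(math.exp(LAMBDA_235 * t) - 1.0)    # ≈ 1.86: the 207Pb*/235U ratio expected if the two ages are concordant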
Uranium–lead dating
[ "Chemistry" ]
1,260
[ "Radiometric dating", "Radioactivity" ]
2,363,171
https://en.wikipedia.org/wiki/Centrifugal%20pump
Centrifugal pumps are used to transport fluids by the conversion of rotational kinetic energy to the hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or electric motor. They are a sub-class of dynamic axisymmetric work-absorbing turbomachinery. The fluid enters the pump impeller along or near to the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or volute chamber (casing), from which it exits. Common uses include water, sewage, agriculture, petroleum, and petrochemical pumping. Centrifugal pumps are often chosen for their high flow rate capabilities, abrasive solution compatibility, mixing potential, as well as their relatively simple engineering. A centrifugal fan is commonly used to implement an air handling unit or vacuum cleaner. The reverse function of the centrifugal pump is a water turbine converting potential energy of water pressure into mechanical rotational energy. History According to Reti, the first machine that could be characterized as a centrifugal pump was a mud lifting machine which appeared as early as 1475 in a treatise by the Italian Renaissance engineer Francesco di Giorgio Martini. True centrifugal pumps were not developed until the late 17th century, when Denis Papin built one using straight vanes. The curved vane was introduced by British inventor John Appold in 1851. Working principle Like most pumps, a centrifugal pump converts rotational energy, often from a motor, to energy in a moving fluid. A portion of the energy goes into kinetic energy of the fluid. Fluid enters axially through eye of the casing, is caught up in the impeller blades, and is whirled tangentially and radially outward until it leaves through all circumferential parts of the impeller into the diffuser part of the casing. The fluid gains both velocity and pressure while passing through the impeller. The doughnut-shaped diffuser, or scroll, section of the casing decelerates the flow and further increases the pressure. Description by Euler A consequence of Newton's second law of mechanics is the conservation of the angular momentum (or the “moment of momentum”) which is of fundamental significance to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. Angular momentums at inlet and outlet, an external torque and friction moments due to shear stresses act on an impeller or a diffuser, where: is the fluid density (kg/m3) is the flow rate (m3/s) is the radius the absolute velocity vector is the peripheral circumferential velocity vector. Since no pressure forces are created on cylindrical surfaces in the circumferential direction, it is possible to write Eq. (1.10) as: Euler's pump equation Based on Eq. (1.13) Euler developed the head pressure equation created by the impeller see Fig.2.2 In Eq. (2) the sum of 4 front element number call static pressure, the sum of last 2 element number call velocity pressure look carefully on the Fig 2.2 and the detail equation. : theoretical head pressure: = between 9.78 and 9.82 m/s depending on latitude, conventional standard value of exactly 9.80665 m/s barycentric gravitational acceleration : peripheral circumferential velocity vector : inlet circumferential velocity vector : angular velocity : inlet relative velocity vector : outlet relative velocity vector inlet absolute velocity vector : outlet absolute velocity vector Velocity triangle The color triangle formed by velocity vectors is called the velocity triangle. 
This rule was helpful to detail Eq.(1) become Eq.(2) and wide explained how the pump works. Fig 2.3 (a) shows the velocity triangle of a forward-curved vane impeller; Fig 2.3 (b) shows the velocity triangle of a radial straight-vane impeller. It illustrates rather clearly energy added to the flow (shown in vector ) inversely change upon flow rate (shown in vector ). Efficiency factor where: is the mechanics input power required (in watts) is the fluid density (kg/m) is the standard acceleration of gravity (9.80665 m/s) is the energy head added to the flow (in metres) is the flow rate (in m/s) is the efficiency of the pump plant as a decimal The head added by the pump () is a sum of the static lift, the head loss due to friction and any losses due to valves or pipe bends are all expressed in metres of fluid. Power is more commonly expressed as kilowatts (103 W, kW) or horsepower. The value for the pump efficiency, , may be stated for the pump itself or as a combined efficiency of the pump and motor system. Vertical centrifugal pumps Vertical centrifugal pumps are also referred to as cantilever pumps. They utilize a unique shaft and bearing support configuration that allows the volute to hang in the sump while the bearings are outside the sump. This style of pump uses no stuffing box to seal the shaft but instead utilizes a "throttle bushing". A common application for this style of pump is in a parts washer. Froth pumps In the mineral industry, or in the extraction of oilsand, froth is generated to separate the rich minerals or bitumen from the sand and clays. Froth contains air that tends to block conventional pumps and cause loss of prime. Over history, industry has developed different ways to deal with this problem. In the pulp and paper industry holes are drilled in the impeller. Air escapes to the back of the impeller and a special expeller discharges the air back to the suction tank. The impeller may also feature special small vanes between the primary vanes called split vanes or secondary vanes. Some pumps may feature a large eye, an inducer or recirculation of pressurized froth from the pump discharge back to the suction to break the bubbles. Multistage centrifugal pumps A centrifugal pump containing two or more impellers is called a multistage centrifugal pump. The impellers may be mounted on the same shaft or on different shafts. At each stage, the fluid is directed to the center before making its way to the discharge on the outer diameter. For higher pressures at the outlet, impellers can be connected in series. For higher flow output, impellers can be connected in parallel. A common application of the multistage centrifugal pump is the boiler feedwater pump. For example, a 350 MW unit would require two feedpumps in parallel. Each feedpump is a multistage centrifugal pump producing 150 L/s at 21 MPa. All energy transferred to the fluid is derived from the mechanical energy driving the impeller. This can be measured at isentropic compression, resulting in a slight temperature increase (in addition to the pressure increase). Energy usage The energy usage in a pumping installation is determined by the flow required, the height lifted and the length and friction characteristics of the pipeline. 
The power required to drive a pump () is defined simply using SI units by: where: is the input power required (in watts) is the fluid density (in kilograms per cubic metre, kg/m) is the standard acceleration of gravity (9.80665 m/s) is the energy Head added to the flow (in metres) is the volumetric flow rate (in cubic metres per second, m3/s) is the efficiency of the pump plant as a decimal The head added by the pump () is a sum of the static lift, the head loss due to friction and any losses due to valves or pipe bends all expressed in metres of fluid. Power is more commonly expressed as kilowatts (10 W, kW) or horsepower (1 hp = ). The value for the pump efficiency, , may be stated for the pump itself or as a combined efficiency of the pump and motor system. The energy usage is determined by multiplying the power requirement by the length of time the pump is operating. Problems of centrifugal pumps These are some difficulties faced in centrifugal pumps: Cavitation—the net positive suction head (NPSH) of the system is too low for the selected pump Wear of the impeller—can be worsened by suspended solids or cavitation Corrosion inside the pump caused by the fluid properties Overheating due to low flow Leakage along rotating shaft. Lack of prime—centrifugal pumps must be filled (with the fluid to be pumped) in order to operate Surge Viscous liquids may reduce efficiency Other pump types may be more suitable for high pressure applications Large solids or debris may clog the pump Centrifugal pumps for solids control An oilfield solids control system needs many centrifugal pumps to sit on or in mud tanks. The types of centrifugal pumps used are sand pumps, submersible slurry pumps, shear pumps, and charging pumps. They are defined for their different functions, but their working principle is the same. Magnetically coupled pumps Magnetically coupled pumps, or magnetic drive pumps, vary from the traditional pumping style, as the motor is coupled to the pump by magnetic means rather than by a direct mechanical shaft. The pump works via a drive magnet, 'driving' the pump rotor, which is magnetically coupled to the primary shaft driven by the motor. They are often used where leakage of the fluid pumped poses a great risk (e.g., aggressive fluid in the chemical or nuclear industry, or electric shock - garden fountains). Other use cases include when corrosive, combustible, or toxic fluids must be pumped (e.g., hydrochloric acid, sodium hydroxide, sodium hypochlorite, sulfuric acid, ferric/ferrous chloride or nitric acid). They have no direct connection between the motor shaft and the impeller, so no stuffing box or gland is needed. There is no risk of leakage, unless the casing is broken. Since the pump shaft is not supported by bearings outside the pump's housing, support inside the pump is provided by bushings. The pump size of a magnetic drive pumps can go from few watts of power to a giant 1 MW. Priming The process of filling the pump with liquid is called priming. All centrifugal pumps require liquid in the liquid casing to prime. If the pump casing becomes filled with vapors or gases, the pump impeller becomes gas-bound and incapable of pumping. To ensure that a centrifugal pump remains primed and does not become gas-bound, most centrifugal pumps are located below the level of the source from which the pump is to take its suction. The same effect can be gained by supplying liquid to the pump suction under pressure supplied by another pump placed in the suction line. 
Self-priming centrifugal pump In normal conditions, common centrifugal pumps are unable to evacuate the air from an inlet line leading to a fluid level whose geodetic altitude is below that of the pump. Self-priming pumps have to be capable of evacuating air from the pump suction line without any external auxiliary devices. Centrifugal pumps with an internal suction stage such as water-jet pumps or side-channel pumps are also classified as self-priming pumps. Self-Priming centrifugal pumps were invented in 1935. One of the first companies to market a self-priming centrifugal pump was American Marsh in 1938. Centrifugal pumps that are not designed with an internal or external self-priming stage can only start to pump the fluid after the pump has initially been primed with the fluid. Sturdier but slower, their impellers are designed to move liquid, which is far denser than air, leaving them unable to operate when air is present. In addition, a suction-side swing check valve or a vent valve must be fitted to prevent any siphon action and ensure that the fluid remains in the casing when the pump has been stopped. In self-priming centrifugal pumps with a separation chamber the fluid pumped and the entrained air bubbles are pumped into the separation chamber by the impeller action. The air escapes through the pump discharge nozzle whilst the fluid drops back down and is once more entrained by the impeller. The suction line is thus continuously evacuated. The design required for such a self-priming feature has an adverse effect on pump efficiency. Also, the dimensions of the separating chamber are relatively large. For these reasons this solution is only adopted for small pumps, e.g. garden pumps. More frequently used types of self-priming pumps are side-channel and water-ring pumps. Another type of self-priming pump is a centrifugal pump with two casing chambers and an open impeller. This design is not only used for its self-priming capabilities but also for its degassing effects when pumping two-phase mixtures (air/gas and liquid) for a short time in process engineering or when handling polluted fluids, for example, when draining water from construction pits. This pump type operates without a foot valve and without an evacuation device on the suction side. The pump has to be primed with the fluid to be handled prior to commissioning. Two-phase mixture is pumped until the suction line has been evacuated and the fluid level has been pushed into the front suction intake chamber by atmospheric pressure. During normal pumping operation this pump works like an ordinary centrifugal pump. See also Centrifugal compressor Axial flow pump Net positive suction head (NPSH) Pump Roots blower Seal (mechanical) Specific speed (Ns or Nss) Thermodynamic pump testing Turbine Turbopump References Sources ASME B73 Standards Committee, Chemical Standard Pumps External links Pumps Gas compressors Turbines Hydraulic engineering Power engineering Articles containing video clips
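The power relation stated in the Energy usage section above (input power equal to ρ·g·H·Q divided by the efficiency) is straightforward to evaluate. The Python sketch below is illustrative only; the function name and the example duty point are assumptions, not figures from the article.

G = 9.80665  # standard acceleration of gravity, m/s^2

def pump_input_power(density_kg_m3, head_m, flow_m3_s, efficiency):
    # P = ρ · g · H · Q / η, in watts, for SI inputs.
    return density_kg_m3 * G * head_m * flow_m3_s / efficiency

print(pump_input_power(1000.0, 30.0, 0.05, 0.70) / 1000.0)  # ≈ 21 kW for water, 30 m total head, 50 L/s, 70% combined efficiency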
Centrifugal pump
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,919
[ "Pumps", "Hydrology", "Turbomachinery", "Gas compressors", "Turbines", "Energy engineering", "Physical systems", "Hydraulics", "Civil engineering", "Power engineering", "Electrical engineering", "Hydraulic engineering" ]
2,363,928
https://en.wikipedia.org/wiki/Algebra%20of%20physical%20space
In physics, the algebra of physical space (APS) is the use of the Clifford or geometric algebra Cl3,0(R) of the three-dimensional Euclidean space as a model for (3+1)-dimensional spacetime, representing a point in spacetime via a paravector (3-dimensional vector plus a 1-dimensional scalar). The Clifford algebra Cl3,0(R) has a faithful representation, generated by Pauli matrices, on the spin representation C2; further, Cl3,0(R) is isomorphic to the even subalgebra Cl(R) of the Clifford algebra Cl3,1(R). APS can be used to construct a compact, unified and geometrical formalism for both classical and quantum mechanics. APS should not be confused with spacetime algebra (STA), which concerns the Clifford algebra Cl1,3(R) of the four-dimensional Minkowski spacetime. Special relativity Spacetime position paravector In APS, the spacetime position is represented as the paravector where the time is given by the scalar part , and e1, e2, e3 is a basis for position space. Throughout, units such that are used, called natural units. In the Pauli matrix representation, the unit basis vectors are replaced by the Pauli matrices and the scalar part by the identity matrix. This means that the Pauli matrix representation of the space-time position is Lorentz transformations and rotors The restricted Lorentz transformations that preserve the direction of time and include rotations and boosts can be performed by an exponentiation of the spacetime rotation biparavector W In the matrix representation, the Lorentz rotor is seen to form an instance of the group (special linear group of degree 2 over the complex numbers), which is the double cover of the Lorentz group. The unimodularity of the Lorentz rotor is translated in the following condition in terms of the product of the Lorentz rotor with its Clifford conjugation This Lorentz rotor can be always decomposed in two factors, one Hermitian , and the other unitary , such that The unitary element R is called a rotor because this encodes rotations, and the Hermitian element B encodes boosts. Four-velocity paravector The four-velocity, also called proper velocity, is defined as the derivative of the spacetime position paravector with respect to proper time τ: This expression can be brought to a more compact form by defining the ordinary velocity as and recalling the definition of the gamma factor: so that the proper velocity is more compactly: The proper velocity is a positive unimodular paravector, which implies the following condition in terms of the Clifford conjugation The proper velocity transforms under the action of the Lorentz rotor L as Four-momentum paravector The four-momentum in APS can be obtained by multiplying the proper velocity with the mass as with the mass shell condition translated into Classical electrodynamics Electromagnetic field, potential, and current The electromagnetic field is represented as a bi-paravector F: with the Hermitian part representing the electric field E and the anti-Hermitian part representing the magnetic field B. In the standard Pauli matrix representation, the electromagnetic field is: The source of the field F is the electromagnetic four-current: where the scalar part equals the electric charge density ρ, and the vector part the electric current density j. Introducing the electromagnetic potential paravector defined as: in which the scalar part equals the electric potential ϕ, and the vector part the magnetic potential A. 
The electromagnetic field is then also: The field can be split into electric and magnetic components. Here, and F is invariant under a gauge transformation of the form where is a scalar field. The electromagnetic field is covariant under Lorentz transformations according to the law Maxwell's equations and the Lorentz force The Maxwell equations can be expressed in a single equation: where the overbar represents the Clifford conjugation. The Lorentz force equation takes the form Electromagnetic Lagrangian The electromagnetic Lagrangian is which is a real scalar invariant. Relativistic quantum mechanics The Dirac equation, for an electrically charged particle of mass m and charge e, takes the form: where e3 is an arbitrary unitary vector, and A is the electromagnetic paravector potential as above. The electromagnetic interaction has been included via minimal coupling in terms of the potential A. Classical spinor The differential equation of the Lorentz rotor that is consistent with the Lorentz force is such that the proper velocity is calculated as the Lorentz transformation of the proper velocity at rest which can be integrated to find the space-time trajectory with the additional use of See also Paravector Multivector wikibooks:Physics Using Geometric Algebra Dirac equation in the algebra of physical space Algebra References Textbooks Articles Mathematical physics Geometric algebra Clifford algebras Special relativity Electromagnetism
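Most of the displayed formulas in this article were lost when the text was extracted. As a partial, hedged reconstruction (written in standard APS conventions rather than taken from the article's own markup), the central objects referred to above can be expressed in LaTeX as follows:

x = x^0 + \mathbf{x} = t + x^1\mathbf{e}_1 + x^2\mathbf{e}_2 + x^3\mathbf{e}_3 \qquad \text{(spacetime position paravector)}

u = \frac{\mathrm{d}x}{\mathrm{d}\tau} = \gamma\,(1 + \mathbf{v}), \qquad u\bar{u} = 1 \qquad \text{(proper velocity and its unimodularity)}

p = m\,u, \qquad p\bar{p} = m^2 \qquad \text{(four-momentum and mass-shell condition)}

u \mapsto L\,u\,L^{\dagger}, \qquad L\bar{L} = 1 \qquad \text{(Lorentz transformation of a paravector by a unimodular rotor)}

F = \mathbf{E} + i\,\mathbf{B}, \qquad i \equiv \mathbf{e}_1\mathbf{e}_2\mathbf{e}_3 \qquad \text{(electromagnetic field bi-paravector)}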
Algebra of physical space
[ "Physics", "Mathematics" ]
1,018
[ "Electromagnetism", "Physical phenomena", "Applied mathematics", "Theoretical physics", "Special relativity", "Fundamental interactions", "Theory of relativity", "Mathematical physics" ]
2,364,473
https://en.wikipedia.org/wiki/Natta%20projection
In chemistry, the Natta projection (named for Italian chemist Giulio Natta) is a way to depict molecules with complete stereochemistry in two dimensions in a skeletal formula. In a hydrocarbon molecule with all carbon atoms making up the backbone in a tetrahedral molecular geometry, the zigzag backbone is in the paper plane (chemical bonds depicted as solid line segments) with the substituents either sticking out of the paper toward the viewer (chemical bonds depicted as solid wedges) or away from the viewer (chemical bonds depicted as dashed wedges). The Natta projection is useful for representing the tacticity of a polymer. See also Structural formula Wedge-and-dash notation in skeletal formulas Haworth projection Newman projection Fischer projection References Eponymous diagrams of chemistry Stereochemistry
Natta projection
[ "Physics", "Chemistry" ]
161
[ "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime" ]
5,852,751
https://en.wikipedia.org/wiki/Grothendieck%20connection
In algebraic geometry and synthetic differential geometry, a Grothendieck connection is a way of viewing connections in terms of descent data from infinitesimal neighbourhoods of the diagonal. Introduction and motivation The Grothendieck connection is a generalization of the Gauss–Manin connection constructed in a manner analogous to that in which the Ehresmann connection generalizes the Koszul connection. The construction itself must satisfy a requirement of geometric invariance, which may be regarded as the analog of covariance for a wider class of structures including the schemes of algebraic geometry. Thus the connection in a certain sense must live in a natural sheaf on a Grothendieck topology. In this section, we discuss how to describe an Ehresmann connection in sheaf-theoretic terms as a Grothendieck connection. Let be a manifold and a surjective submersion, so that is a manifold fibred over Let be the first-order jet bundle of sections of This may be regarded as a bundle over or a bundle over the total space of With the latter interpretation, an Ehresmann connection is a section of the bundle (over ) The problem is thus to obtain an intrinsic description of the sheaf of sections of this vector bundle. Grothendieck's solution is to consider the diagonal embedding The sheaf of ideals of in consists of functions on which vanish along the diagonal. Much of the infinitesimal geometry of can be realized in terms of For instance, is the sheaf of sections of the cotangent bundle. One may define a first-order infinitesimal neighborhood of in to be the subscheme corresponding to the sheaf of ideals (See below for a coordinate description.) There are a pair of projections given by projection the respective factors of the Cartesian product, which restrict to give projections One may now form the pullback of the fibre space along one or the other of or In general, there is no canonical way to identify and with each other. A Grothendieck connection is a specified isomorphism between these two spaces. One may proceed to define curvature and p-curvature of a connection in the same language. See also References Osserman, B., "Connections, curvature, and p-curvature", preprint. Katz, N., "Nilpotent connections and the monodromy theorem", IHES Publ. Math. 39 (1970) 175–232. Connection (mathematics) Algebraic geometry
Grothendieck connection
[ "Mathematics" ]
513
[ "Fields of abstract algebra", "Algebraic geometry" ]
5,853,880
https://en.wikipedia.org/wiki/Bromocresol%20purple
Bromocresol purple (BCP) or 5′,5″-dibromo-o-cresolsulfophthalein, is a dye of the triphenylmethane family (triarylmethane dyes) and a pH indicator. It is colored yellow below pH 5.2, and violet above pH 6.8. In its cyclic sulfonate ester form, it has a pKa value of 6.3, and is usually prepared as a 0.04% aqueous solution. Uses Bromocresol purple is used in medical laboratories to measure albumin. Use of BCP in this application may provide some advantage over older methods using bromocresol green. In microbiology, it is used for staining dead cells based on their acidity, and for the isolation and assaying of lactic acid bacteria. In photographic processing, it can be used as an additive to acid stop baths to indicate that the bath has reached neutral pH and needs to be replaced. Bromocresol purple milk solids glucose agar is used as a medium used to distinguish dermatophytes from bacteria and other organisms in cases of ringworm fungus (T. verrucosum) infestation in cattle and other animals. pH Indicator Similar to bromocresol green, the structure of bromocresol purple changes with pH. Changing the level of acidity causes a shift in the equilibrium between two different structures that have different colors. In near-neutral or alkaline solution, the chemical has a sulfonate structure that gives the solution a purple color. As the pH decreases, it converts to a sultone (cyclic sulfonic ester) that colors the solution yellow. In some microbiology tests, this change is used as an indicator of bacterial growth. See also Metacresol purple References External links Material Safety Data Sheet PH indicators Triarylmethane dyes Phenol dyes Bromobenzene derivatives Benzoxathioles Purple
Bromocresol purple
[ "Chemistry", "Materials_science" ]
412
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
5,854,024
https://en.wikipedia.org/wiki/Deoxy%20sugar
Deoxy sugars are sugars that have had a hydroxyl group replaced with a hydrogen atom. Examples include: Deoxyribose, or 2-deoxy-D-ribose, a constituent of DNA Fucose, or 6-deoxy-L-galactose, main component of fucoidan of brown algae, and present in N-linked glycans Fuculose, or 6-deoxy-L-tagatose, one of the important components of avian influenza virus particles Rhamnose, or 6-deoxy-L-mannose, present in plant glycosides In Escherichia coli bacteria, deoxyribose sugars are synthesized via two different pathways - one pathway involves aldol condensation, whereas the other pathway is conversion of a ribose sugar into a deoxyribose sugar by means of changes on the nucleotide or nucleoside level. Deoxyribose is synthesized through the reduction of ribose. Deoxyribose is derived from the same precursor as ribose being that the reduction of the sugar with the extra hydroxyl group results in the deoxy-sugar, which has its hydroxyl group replaced with a hydrogen atom. Dideoxy sugars Some biologically important dideoxy sugars, sugars that have had two hydroxyl groups replaced with hydrogen atoms, include colitose and abequose. See also Deoxynucleotide References External links Overview at qmul.ac.uk
Deoxy sugar
[ "Chemistry" ]
327
[ "Deoxy sugars", "Carbohydrates" ]
5,854,119
https://en.wikipedia.org/wiki/Indigo%20carmine
Indigo carmine, or 5,5′-indigodisulfonic acid sodium salt, is an organic salt derived from indigo by aromatic sulfonation, which renders the compound soluble in water. Like indigo, it produces a blue color, and is used in food and other consumables, cosmetics, and as a medical contrast agent and staining agent; it also acts as a pH indicator. It is approved for human consumption in the United States and European Union. It has the E number E132, and is named Blue No. 2 by the US Federal Food, Drug, and Cosmetic Act. Uses Indigo carmine in a 0.2% aqueous solution is blue at pH 11.4 and yellow at 13.0. Indigo carmine is also a redox indicator, turning yellow upon reduction. Another use is as a dissolved ozone indicator through the conversion to isatin-5-sulfonic acid. This reaction has been shown not to be specific to ozone: it also detects superoxide, an important distinction in cell physiology. It is also used as a dye in the manufacturing of capsules. Medical uses Indigotindisulfonate sodium, sold under the brand name Bludigo, is used as a contrast agent during surgical procedures. It is indicated for use in cystoscopy in adults following urological and gynecological procedures. It was approved for medical use in the United States in July 2022. In obstetric surgery, it may be used to detect amniotic fluid leaks. In urologic surgery, intravenous indigo carmine can be used to highlight portions of the urinary tract. The dye is filtered rapidly by the kidneys from the blood, and colors the urine blue. However, the dye can cause a potentially dangerous acute increase in blood pressure in some cases. Indigo carmine stain is not absorbed into cells, so it is applied to tissues to enhance the visibility of mucosa. This leads to its use for examination and diagnosis of benign and malignant lesions and growths on mucosal surfaces of the body. Food, pharmaceutical, cosmetic, and scientific uses Indigo carmine is one of the few blue food colorants. Others include the anthocyanidins and rare entites such as variagatic acid and popolohuanone. Safety and regulation Indigo carmine shows "genotoxicity, developmental toxicity or modifications of haematological parameters in chronic toxicity studies". Only at 17 mg/kg of body weight per day were effects on testes observed. References External links PH indicators Sulfonates Organic sodium salts Indoles Enones Food colorings Redox indicators E-number additives Acid dyes
Indigo carmine
[ "Chemistry", "Materials_science" ]
548
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Salts", "Redox indicators", "Organic sodium salts", "Electrochemistry", "Equilibrium chemistry" ]
5,854,506
https://en.wikipedia.org/wiki/Aspergillosis
Aspergillosis is a fungal infection of usually the lungs, caused by the genus Aspergillus, a common mould that is breathed in frequently from the air, but does not usually affect most people. It generally occurs in people with lung diseases such as asthma, cystic fibrosis or tuberculosis, or those who are immunocompromised such as those who have had a stem cell or organ transplant or those who take medications such as steroids and some cancer treatments which suppress the immune system. Rarely, it can affect skin. Aspergillosis occurs in humans, birds and other animals. Aspergillosis occurs in chronic or acute forms which are clinically very distinct. Most cases of acute aspergillosis occur in people with severely compromised immune systems such as those undergoing bone marrow transplantation. Chronic colonization or infection can cause complications in people with underlying respiratory illnesses, such as asthma, cystic fibrosis, sarcoidosis, tuberculosis, or chronic obstructive pulmonary disease. Most commonly, aspergillosis occurs in the form of chronic pulmonary aspergillosis (CPA), aspergilloma, or allergic bronchopulmonary aspergillosis (ABPA). Some forms are intertwined; for example ABPA and simple aspergilloma can progress to CPA. Other, noninvasive manifestations include fungal sinusitis (both allergic in nature and with established fungal balls), otomycosis (ear infection), keratitis (eye infection), and onychomycosis (nail infection). In most instances, these are less severe, and curable with effective antifungal treatment. The most frequently identified pathogens are Aspergillus fumigatus and Aspergillus flavus, ubiquitous organisms capable of living under extensive environmental stress. Most people are thought to inhale thousands of Aspergillus spores daily but without effect due to an efficient immune response. Invasive aspergillosis has a 20% mortality at 6 months. The major chronic, invasive, and allergic forms of aspergillosis account for around 600,000 deaths annually worldwide. Signs and symptoms A fungus ball in the lungs may cause no symptoms and may be discovered only with a chest X-ray, or it may cause repeated coughing up of blood, chest pain, and occasionally severe, even fatal, bleeding. A rapidly invasive Aspergillus infection in the lungs often causes cough, fever, chest pain, and difficulty breathing. Poorly controlled aspergillosis can disseminate through the blood to cause widespread organ damage. Symptoms include fever, chills, shock, delirium, seizures, and blood clots. The person may develop kidney failure, liver failure (causing jaundice), and breathing difficulties. Death can occur quickly. Aspergillosis of the ear canal causes itching and occasionally pain. Fluid draining overnight from the ear may leave a stain on the pillow. Aspergillosis of the sinuses causes a feeling of congestion and sometimes pain or discharge. It can extend beyond the sinuses. Cause Aspergillosis is caused by Aspergillus, a common mold, which tends to affect people who already have a lung disease such as cystic fibrosis or asthma, or who cannot fight infection themselves. The most common causative species is Aspergillus fumigatus. Risk factors People who are immunocompromised — such as patients undergoing hematopoietic stem cell transplantation, chemotherapy for leukaemia, or AIDS — are at an increased risk for invasive aspergillosis infections. These people may have neutropenia or corticosteroid-induced immunosuppression as a result of medical treatments. 
Neutropenia is often caused by extremely cytotoxic medications such as cyclophosphamide. Cyclophosphamide interferes with cellular replication including that of white blood cells such as neutrophils. A decreased neutrophil count inhibits the ability of the body to mount immune responses against pathogens. Although tumor necrosis factor alpha (TNF-α) — a signaling molecule related to acute inflammation responses — is produced, the abnormally low number of neutrophils present in neutropenic patients leads to a depressed inflammatory response. If the underlying neutropenia is not fixed, rapid and uncontrolled hyphal growth of the invasive fungi will occur and result in negative health outcomes. In addition to decreased neutrophil degranulation, the antiviral response against Flu and SARS-CoV-2 viruses, mediated by type I and type II interferon, is diminished jointly with the local antifungal immune response measured in the lungs of patients with IAPA (Influenza-Associated Pulmonary Aspergillosis) and CAPA (COVID-19-Associated Pulmonary Aspergillosis). COVID-19 patients with preexisting or newly diagnosed bronchiectasis are at particular risk of developing pulmonary aspergillosis. Normally the mucociliary clearance mechanism of the airways of the lungs removes inhaled particles. However, in those with underlying lung diseases, such as cystic fibrosis or bronchiectasis, this mucociliary clearance mechanism is impaired and aspergillus spores (which are 2-5 μm in diameter) are able to colonize the airways and sinuses. Diagnosis On chest X-ray and CT, pulmonary aspergillosis classically manifests as a halo sign, and later, an air crescent sign. In hematologic patients with invasive aspergillosis, the galactomannan test can make the diagnosis in a noninvasive way. Galactomannan is a component of the fungal wall. False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics or fluids containing gluconate or citric acid such as some transfusion platelets, parenteral nutrition, or PlasmaLyte. On microscopy, Aspergillus species are reliably demonstrated by silver stains, e.g., Gridley stain or Gomori methenamine-silver. These give the fungal walls a gray-black colour. The hyphae of Aspergillus species range in diameter from 2.5 to 4.5 μm. They have septate hyphae, but these are not always apparent, and in such cases they may be mistaken for Zygomycota. Aspergillus hyphae tend to have dichotomous branching that is progressive and primarily at acute angles of around 45°. In those with suspected pulmonary aspergillosis, bronchoalveolar lavage can be done to collect fluid from the airways of the lungs for analysis. This involves exploration of the airway with a bronchoscope, then the introduction of a small amount of fluid into the airway which is then collected for analysis. This analysis can include assessment of cells, galactomannan testing, staining and fungal culture as well as polymerase chain reaction (PCR) assessment. PCR has an 85% sensitivity and 76% specificity for the diagnosis of aspergillosis, sensitivity is increased when the test is combined with galactomannan testing. Allergic bronchopulmonary aspergillosis (ABPA) is diagnosed by establishing an immune sensitization to aspergillosis, which can be done by measuring total eosinophils, total immunoglobulin E (IgE) and aspergillus specific IgE and IgG levels. Elevations of these markers, combined with clinical and radiologic findings suggesting infection are used to confirm ABPA. 
Prevention Prevention of aspergillosis involves a reduction of mold exposure via environmental infection-control. Antifungal prophylaxis can be given to high-risk patients. Posaconazole is often given as prophylaxis in severely immunocompromised patients. Screening A systematic review has evaluated the diagnostic accuracy of polymerase chain reaction (PCR) tests in people with defective immune systems from medical treatment such as chemotherapy. Evidence suggests PCR tests have moderate diagnostic accuracy when used for screening for invasive aspergillosis in high risk groups. CT and MRI are vital to diagnosis, however it is always highly recommended to undergo a biopsy of the area to confirm a diagnosis. Treatment The current medical treatments for aggressive invasive aspergillosis include voriconazole and liposomal amphotericin B in combination with surgical debridement. For the less aggressive allergic bronchopulmonary aspergillosis, findings suggest the use of oral steroids for a prolonged period of time, preferably for 6–9 months in allergic aspergillosis of the lungs. Itraconazole is given with the steroids, as it is considered to have a "steroid-sparing" effect, causing the steroids to be more effective, allowing a lower dose. Other drugs used, such as amphotericin B, caspofungin (in combination therapy only), flucytosine (in combination therapy only), or itraconazole, are used to treat this fungal infection. However, a growing proportion of infections are resistant to the triazoles. A. fumigatus, the most commonly infecting species, is intrinsically resistant to fluconazole. Epidemiology Aspergillosis is thought to affect more than 14 million people worldwide, with allergic bronchopulmonary aspergillosis (ABPA) infecting about 4 million, severe asthma with fungal sensitization affecting about 6.5 million, and chronic pulmonary aspergillosis (CPA) infecting about 3 million people, considerably more than invasive aspergillosis which affects about 300,000 people. Other common conditions include Aspergillus bronchitis, Aspergillus rhinosinusitis, or otitis externa. Society and culture During the COVID-19 pandemic 2020/21, COVID-19-associated pulmonary aspergillosis was reported in some people who had been admitted to hospital and received longterm steroid treatment. Animals While relatively rare in humans, aspergillosis is a common and dangerous infection in birds, particularly in pet parrots. Mallards and other ducks are particularly susceptible, as they often resort to poor food sources during bad weather. Captive raptors, such as falcons and hawks, are susceptible to this disease if they are kept in poor conditions and especially if they are fed pigeons, which are often carriers of "asper". It can be acute in chicks, but chronic in mature birds. In the United States, aspergillosis has been the culprit in several rapid die-offs among waterfowl. From 8 December until 14 December 2006, over 2,000 mallards died near Burley, Idaho, an agricultural community about 150 miles southeast of Boise. Mouldy waste grain from the farmland and feedlots in the area is the suspected source. A similar aspergillosis outbreak caused by mouldy grain killed 500 mallards in Iowa in 2005. While no connection has been found between aspergillosis and the H5N1 strain of avian influenza (commonly called "bird flu"), rapid die-offs caused by aspergillosis can spark fears of bird flu outbreaks. Laboratory analysis is the only way to distinguish bird flu from aspergillosis. 
In dogs, aspergillosis is an uncommon disease typically affecting only the nasal passages (nasal aspergillosis). This is much more common in dolicocephalic breeds. It can also spread to the rest of the body; this is termed disseminated aspergillosis and is rare, usually affecting individuals with underlying immune disorders. In 2019, an outbreak of aspergillosis struck the rare kākāpō, a large flightless parrot endemic to New Zealand. By June the disease had killed seven of the birds, whose total population at the time was only 142 adults and 72 chicks. One fifth of the population was infected with the disease and the entire species was considered at risk of extinction. See also Other ways in which aspergillus can cause disease in mammals: Primary cutaneous aspergillosis Aflatoxin References External links Aspergillosis, MedlinePlus, US National Library of Medicine Aspergillus & Aspergillosis Website National Aspergillosis Centre, Manchester, UK Animal fungal diseases Mycosis-related cutaneous conditions Poultry diseases Fungal diseases
Aspergillosis
[ "Biology" ]
2,620
[ "Fungi", "Fungal diseases" ]
5,855,043
https://en.wikipedia.org/wiki/METEOR
METEOR (Metric for Evaluation of Translation with Explicit ORdering) is a metric for the evaluation of machine translation output. The metric is based on the harmonic mean of unigram precision and recall, with recall weighted higher than precision. It also has several features that are not found in other metrics, such as stemming and synonymy matching, along with the standard exact word matching. The metric was designed to fix some of the problems found in the more popular BLEU metric, and also produce good correlation with human judgement at the sentence or segment level. This differs from the BLEU metric in that BLEU seeks correlation at the corpus level. Results have been presented which give correlation of up to 0.964 with human judgement at the corpus level, compared to BLEU's achievement of 0.817 on the same data set. At the sentence level, the maximum correlation with human judgement achieved was 0.403. Algorithm As with BLEU, the basic unit of evaluation is the sentence, the algorithm first creates an alignment (see illustrations) between two sentences, the candidate translation string, and the reference translation string. The alignment is a set of mappings between unigrams. A mapping can be thought of as a line between a unigram in one string, and a unigram in another string. The constraints are as follows; every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected to produce an alignment as defined above. If there are two alignments with the same number of mappings, the alignment is chosen with the fewest crosses, that is, with fewer intersections of two mappings. From the two alignments shown, alignment (a) would be selected at this point. Stages are run consecutively and each stage only adds to the alignment those unigrams which have not been matched in previous stages. Once the final alignment is computed, the score is computed as follows: Unigram precision is calculated as: Where is the number of unigrams in the candidate translation that are also found in the reference translation, and is the number of unigrams in the candidate translation. Unigram recall is computed as: Where is as above, and is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean in the following fashion, with recall weighted 9 times more than precision: The measures that have been introduced so far only account for congruity with respect to single words but not with respect to larger segments that appear in both the reference and the candidate sentence. In order to take these into account, longer n-gram matches are used to compute a penalty for the alignment. The more mappings there are that are not adjacent in the reference and the candidate sentence, the higher the penalty will be. In order to compute this penalty, unigrams are grouped into the fewest possible chunks, where a chunk is defined as a set of unigrams that are adjacent in the hypothesis and in the reference. The longer the adjacent mappings between the candidate and the reference, the fewer chunks there are. A translation that is identical to the reference will give just one chunk. The penalty is computed as follows, Where c is the number of chunks, and is the number of unigrams that have been mapped. The final score for a segment is calculated as below. The penalty has the effect of reducing the by up to 50% if there are no bigram or longer matches. 
To calculate a score over a whole corpus, or collection of segments, the aggregate values for , and are taken and then combined using the same formula. The algorithm also works for comparing a candidate translation against more than one reference translations. In this case the algorithm compares the candidate against each of the references and selects the highest score. Examples See also BLEU F-Measure NIST (metric) ROUGE (metric) Word Error Rate (WER) LEPOR Noun-Phrase Chunking Notes Banerjee, S. and Lavie, A. (2005) References Banerjee, S. and Lavie, A. (2005) "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments" in Proceedings of Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association of Computational Linguistics (ACL-2005), Ann Arbor, Michigan, June 2005 Lavie, A., Sagae, K. and Jayaraman, S. (2004) "The Significance of Recall in Automatic Metrics for MT Evaluation" in Proceedings of AMTA 2004, Washington DC. September 2004 External links The METEOR Automatic Machine Translation Evaluation System (including link for download) Natural language processing Evaluation of machine translation
METEOR
[ "Technology" ]
983
[ "Natural language processing", "Natural language and computing" ]
16,731,070
https://en.wikipedia.org/wiki/HD%2097048
HD 97048 or CU Chamaeleontis is a Herbig Ae/Be star away in the constellation Chamaeleon. It is a variable star embedded in a dust cloud containing a stellar nursery, and is itself surrounded by a dust disk. HD 97048 is a young star still contracting towards the main sequence. Its brightness varies between magnitudes 8.38 and 8.48 and it is classified as an Orion variable. It was given the variable star designation CU Chamaeleontis in 1981. Its spectrum is also variable. The spectral class is usually given as A0 or B9, sometimes with a giant luminosity class, sometimes main sequence. The spectrum shows strong variable emission lines indicative of a shell surrounding the star. HD 97048 is a member of the Chamaeleon T1 stellar association and is still embedded within the dark molecular cloud that it is forming from. It illuminates a small reflection nebula against the dark cloud. Planetary system This star has a substantial dust disk having a central cavity with a 40−46 AU radius The disk has a carbon monoxide gas velocity kink and intensity gap at 130 AUs, which is suspected to be caused by a superjovian planet. In 2019, HCO+ ion and Hydrogen cyanide emission was detected from the disk, suggesting a large amount of gas is orbiting beyond 200 AU radius. In the system a kink in the velocity of carbon monoxide gas (CO 3–2) as well as a gap in the dust emission of the disk are seen as evidence for a jovian protoplanet. The protoplanet is located at 130 au from the star and has a mass of about 2.5 Jupiter masses. It is one of the lowest mass protoplanets discovered as of 2023. References Chamaeleon Circumstellar disks 097048 Chamaeleontis, CU 054413 CD-76 488 Herbig Ae/Be stars Hypothetical planetary systems
HD 97048
[ "Astronomy" ]
405
[ "Chamaeleon", "Constellations" ]
16,731,404
https://en.wikipedia.org/wiki/Hundred-year%20wave
A hundred-year wave is a statistically projected water wave, the height of which, on average, is met or exceeded once in a hundred years for a given location. The likelihood of this wave height being attained at least once in the hundred-year period is 63%. As a projection of the most extreme wave which can be expected to occur in a given body of water, the hundred-year wave is a factor commonly taken into consideration by designers of oil platforms and other offshore structures. Periods of time other than a hundred years may also be taken into account, resulting in, for instance, a fifty-year wave. Various methods are employed to predict the possible steepness and period of these waves, in addition to their height. See also Index of wave articles Significant wave height Shallow water equations Rogue wave References Physical oceanography Water waves
Hundred-year wave
[ "Physics", "Chemistry" ]
170
[ "Physical phenomena", "Applied and interdisciplinary physics", "Water waves", "Waves", "Physical oceanography", "Fluid dynamics" ]
16,731,978
https://en.wikipedia.org/wiki/Human%20skeletal%20changes%20due%20to%20bipedalism
The evolution of human bipedalism, which began in primates approximately four million years ago, or as early as seven million years ago with Sahelanthropus, or approximately twelve million years ago with Danuvius guggenmosi, has led to morphological alterations to the human skeleton including changes to the arrangement, shape, and size of the bones of the foot, hip, knee, leg, and the vertebral column. These changes allowed for the upright gait to be overall more energy efficient in comparison to quadrupeds. The evolutionary factors that produced these changes have been the subject of several theories that correspond with environmental changes on a global scale. Energy efficiency Human walking is about 75% less costly than both quadrupedal and bipedal walking in chimpanzees. Some hypotheses have supported that bipedalism increased the energetic efficiency of travel and that this was an important factor in the origin of bipedal locomotion. Humans save more energy than quadrupeds when walking but not when running. Human running is 75% less efficient than walking. A 1980 study reported that walking in living hominin bipeds is noticeably more efficient than walking in living hominin quadrupeds, but the costs of quadrupedal and bipedal travel are the same. Foot Human feet evolved enlarged heels. The human foot evolved as a platform to support the entire weight of the body, rather than acting as a grasping structure (like hands), as it did in early hominids. Humans therefore have smaller toes than their bipedal ancestors. This includes a non-opposable hallux, which is relocated in line with the other toes. The push off would also require all the toes to be slightly bent up. Humans have a foot arch rather than being flat footed. When non-human hominids walk upright, weight is transmitted from the heel, along the outside of the foot, and then through the middle toes while a human foot transmits weight from the heel, along the outside of the foot, across the ball of the foot and finally through the big toe. This transference of weight contributes to energy conservation during locomotion. The muscles that work along with the hallux has evolved to provide efficient push off. The long arch has also evolved to provide efficient push-off. The stiffening of the arch would be required of an upward gait, all considered that modern bipedalism does not include grasping of tree branches, which also explains the hallux evolving to line up with the rest of the toes. Knee Human knee joints are enlarged for the same reason as the hip – to better support an increased amount of body weight. The degree of knee extension (the angle between the thigh and shank in a walking cycle) has decreased. The changing pattern of the knee joint angle of humans shows a small extension peak, called the "double knee action," in the midstance phase. Double knee action decreases energy lost by vertical movement of the center of gravity. Humans walk with their knees kept straight and the thighs bent inward so that the knees are almost directly under the body, rather than out to the side, as is the case in ancestral hominids. This type of gait also aids balance. Limbs An increase in leg length since the evolution of bipedalism changed how leg muscles functioned in upright gait. In humans, the push for walking comes from the leg muscles acting at the ankle. A longer leg allows the use of the natural swing of the limb so that, when walking, humans do not need to use muscle to swing the other leg forward for the next step. 
As a consequence, since the human forelimbs are not needed for locomotion, they are instead optimized for carrying, holding, and manipulating objects with great precision. This results in decreased strength in the forelimbs relative to body size for humans compared to apes. Having long hind limbs and short forelimbs allows humans to walk upright, while orangutans and gibbons had the adaptation of longer arms to swing on branches. Apes can stand on their hindlimbs, but they cannot do so for long periods of time without getting tired. This is because their femurs are not adapted for bipedalism. Apes have vertical femurs, while humans have femurs that are slightly angled medially from the hip to the knee, thus making human knees closer together and under the body's center of gravity. This adaptation lets humans lock their knees and stand up straight for long periods of time without much effort from muscles. The gluteus maximus became a major role in walking and is one of the largest muscles in humans. This muscle is much smaller in chimps, which shows that it has an important role in bipedalism. When humans run, our upright posture tends to flex forward as each foot strikes the ground creating momentum forward. The gluteus muscle helps to prevent the upper trunk of the body from "pitching forward" or falling over. Hip and pelvis Modern human hip joints are larger than in quadrupedal ancestral species to better support the greater amount of body weight passing through them. They also have a shorter, broader shape. This alteration in shape brought the vertebral column closer to the hip joint, providing a stable base for support of the trunk while walking upright. Because bipedal walking requires humans to balance on a relatively unstable ball and socket joint, the placement of the vertebral column closer to the hip joint allows humans to invest less muscular effort in balancing. Change in the shape of the hip may have led to the decrease in the degree of hip extension, an energy efficient adaptation. The ilium changed from a long and narrow shape to a short and broad one and the walls of the pelvis modernized to face laterally. These combined changes provide increased area for the gluteus muscles to attach; this helps to stabilize the torso while standing on one leg. The sacrum has also become more broad, increasing the diameter of the birth canal and making birthing easier. To increase surface for ligament attachment to help support the abdominal viscera during erect posture, the ischial spines became more prominent and shifted towards the middle of the body. Vertebral column The vertebral column of humans takes a forward bend in the lumbar (lower) region and a backward bend in the thoracic (upper) region. Without the lumbar curve, the vertebral column would always lean forward, a position that requires much more muscular effort for bipedal animals. With a forward bend, humans use less muscular effort to stand and walk upright. Together the lumbar and thoracic curves bring the body's center of gravity directly over the feet. Specifically, the S-shaped curve in the spine brings the center of gravity closer to the hips by bringing the torso back. Balance of the whole vertebral column over the hip joints is a major contribution for efficient bipedalism. The degree of body erection (the angle of body incline to a vertical line in a walking cycle) is significantly smaller to conserve energy. The Angle of Sacral Incidence was a concept developed by G. 
Duval-Beaupère and his team at the University of René Descartes. It combines both the pelvic tilt and sacral slope to determine approximately how much lordosis is required for the upright gait to eliminate strain and fatigue on the torso. Lordosis, which the inward curvature of the spine, is normal for an upright gait as long as it is not too excessive or minimal. If the inward curvature of the spine is not enough, the center of balance would be offset causing the body to essentially tip forward, which is why some apes that have the ability to be bipedal require large amounts of energy to stand up. In addition to sacral angles, the sacrum has also evolved to be more flexible in comparison to the stiff sacrum that apes possess. Skull The human skull is balanced on the vertebral column. The foramen magnum is located inferiorly under the skull, which puts much of the weight of the head behind the spine. The flat human face helps to maintain balance on the occipital condyles. Because of this, the erect position of the head is possible without the prominent supraorbital ridges and the strong muscular attachments found in, for example, apes. As a result, in humans the muscles of the forehead (the occipitofrontalis) are only used for facial expressions. Increasing brain size has also been significant in human evolution. It began to increase approximately 2.4 million years ago, but modern levels of brain size were not attained until after 500,000 years ago. Zoological analyses have shown that the size of human brains is significantly larger than what anatomists would expect for their size. The human brain is three to four times larger than its closest relative, which is the chimpanzee. Significance Even with much modification, some features of the human skeleton remain poorly adapted to bipedalism, leading to negative implications prevalent in humans today. The lower back and knee joints are plagued by osteological malfunction, lower back pain being a leading cause of lost working days, because the joints support more weight. Arthritis has been an obstacle since hominids became bipedal: scientists have discovered its traces in the vertebrae of prehistoric hunter-gatherers. Physical constraints have made it difficult to modify the joints for further stability while maintaining efficiency of locomotion. There have been multiple theories as to why bipedalism was favored, thus leading to skeletal changes that aided the upward gait. The savannah hypothesis describes how the transition from arboreal habits to a savannah lifestyle favored an upright, bipedal gait. This would also change the diet of hominins, more specifically a shift from primarily plant-based to a higher protein, meat-based diet. This would eventually increase the size of the brain, changing the skeletal structure of the skull. Transitions from the forests to the savannah meant that sunlight and heat would require major changes in lifestyle. Being a biped on an open field is also an advantage because of heat dispersal. Walking upright reduces the amount of direct sun exposure and radiation in comparison to being a quadruped which have more body surface on top for the sun to hit. Increased capabilities of postural/locomotor neural control is hypothesis suggesting that the transition from quadrupedal to habitual upright bipedal locomotion was caused by qualitative changes in the nervous system that allowed controlling the more demanding type of posture/locomotion. 
Only after the more demanding posture was enabled by changes in the nervous system, could advantages of bipedal over quadrupedal locomotion be utilized, including better scanning of the environment, carrying food and infants, simultaneous upper extremity movements and observation of the environment, limitless manipulation of objects with upper extremities, and less space for rotating around the Z-axis. See also Happisburgh footprints Ileret footprints Laetoli footprints Obstetrical dilemma References Further reading External links Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016). Biomechanics Human physiology Human evolution
Human skeletal changes due to bipedalism
[ "Physics" ]
2,315
[ "Biomechanics", "Mechanics" ]
16,733,372
https://en.wikipedia.org/wiki/Astro-comb
An astro-comb is a type of frequency comb used in observational astronomy to increase the accuracy of wavelength calibration in spectrographs. The increased accuracy reduces systematic errors and enhances detection of small variations in stellar radial velocities caused by smaller orbiting exoplanets (e.g. Earth-mass planets), among other applications. Astro-combs are distinguished from conventional frequency combs by their focus on high repetition frequencies (with mode spacings of ≥10 GHz). A green astro-comb was installed in January 2013 in the high accuracy radial velocity planet searcher in the northern hemisphere (HARPS-N) spectrograph at the Telescopio Nazionale Galileo on the Canary Islands. The device was developed by a team led by Chih-Hao Li of Harvard University. This astro-comb uses a pulsed laser to filter starlight before feeding the signal into a spectrograph. As of December 2016, it is gathering data from Venus to demonstrate its ability to discover exoplanets. See also Comb filter References New Approaches to Precision Astrophysical Spectroscopy The Walsworth Group, October 10, 2016. Accessed November 30, 2016. TNG, HARPS-N and Astro Comb ready to characterize the first earth twin Fundación Galileo Galilei - INAF Telescopio Nazionale Galileo, July 28, 2015. Accessed December 1, 2016 A green astro-comb to search for Earth-like planets Chih-Hao Li, SPIE, January 29, 2015. Accessed November 30, 2016. 'Astro-comb' helps search for Goldilocks planet Physorg.com, April 2, 2008. Accessed April 2, 2008. Astro-combing for Planets Astro Biology Magazine, April 9, 2008. Accessed April 11, 2008. Pulses to Find Planets, (Archive of Pulses to Find Planets Astro Biology Magazine, May 11, 2008. Accessed May 15, 2008. Based on a National Institute of Standards and Technology news release.) Astronomical spectroscopy
Astro-comb
[ "Physics", "Chemistry", "Astronomy" ]
411
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Astronomical spectroscopy", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
16,739,998
https://en.wikipedia.org/wiki/Q-machine
A Q-machine is a device that is used in experimental plasma physics. The name Q-machine stems from the original intention of creating a quiescent plasma that is free from the fluctuations that are present in plasmas created in electric discharges. The Q-machine was first described in a publication by Rynn and D'Angelo. The Q-machine plasma is created at a plate that has been heated to about 2000 K and hence is called the hot plate. Electrons are emitted by the hot plate through thermionic emission, and ions are created through contact ionization of atoms of alkali metals that have low ionization potentials. The hot plate is made of a metal that has a large work function and can withstand high temperatures, e.g. tungsten or rhenium. The alkali metal is boiled in an oven that is designed to direct a beam of alkaline metal vapor onto the hot plate. A high value of the hot plate work function and a low ionization potential of the metal makes for a low potential barrier for an electron in the alkaline metal to overcome, thus making the ionization process more efficient. Sometimes barium is used instead of an alkaline metal due to its excellent spectroscopic properties. The fractional ionization of a Q-machine plasma can approach unity, which can be orders of magnitude greater than that predicted by the Saha ionization equation. The temperature of the Q-machine plasma is close to the temperature of the hot plate, and the ion and electron temperatures are similar. Although this temperature (about 2000 K) is high compared to room temperature, it is much lower than electron temperatures that are usually found in discharge plasma. The low temperature makes it possible to create a plasma column that is several ion gyro radii across. Since the alkaline metals are solids at room temperature they will stick to the walls of the machine on impact, and therefore the neutral pressure can be kept so low that for all practical purposes the plasma is fully ionised. Plasma research that has been performed using Q-machines includes current driven ion cyclotron waves, Kelvin-Helmholtz waves, and electron phase space holes. Today, Q-machines can be found at West Virginia University and at the University of Iowa in the USA, at Tohoku University in Sendai in Japan, and at the University of Innsbruck in Austria. References Plasma technology and applications
Q-machine
[ "Physics" ]
493
[ "Plasma technology and applications", "Plasma physics" ]
16,740,988
https://en.wikipedia.org/wiki/Hempcrete
Hempcrete or hemplime is biocomposite material, a mixture of hemp hurds (shives) and lime, sand, or pozzolans, which is used as a material for construction and insulation. It is marketed under names like Hempcrete, Canobiote, Canosmose, Isochanvre, and IsoHemp. Hempcrete is easier to work with than traditional lime mixes and acts as an insulator and moisture regulator. It lacks the brittleness of concrete and consequently does not need expansion joints. Typically, hempcrete has good thermal and acoustic insulation capabilities, but low mechanical performance, specifically compressive strength. When used in prefabricated blocks, hempcrete acts as a carbon sink throughout its lifetime. The result is a lightweight, insulating material, finishing plaster, or a non-load bearing wall, ideal for most climates, since it combines insulation and thermal mass while providing a positive impact on the environment. Mixture of materials Hempcrete is made of the inner woody core of the hemp plant (hemp shives), a lime-based binder, and water. The binder consists of either hydrated lime, or natural hydraulic lime. Hydrated lime is made from pure limestone and sets through the absorption of CO2 during the carbonation process. When dealing with time constraints, hydraulic binders are used in combination with regular hydrated lime, because the set time for hempcrete will be less than that of regular limes (e.g., about two weeks to a month, to gain adequate strength). A small amount of cement and/or pozzolanic binder is added to speed up the setting time. The overall process creates a mixture that will develop into a solid, light, and durable product. Applications Hempcrete has been used in France since the early 1990s, and more recently in Canada, to construct non-weight bearing insulating infill walls, as hempcrete does not have the requisite strength for constructing foundation and is instead supported by the frame. Hempcrete was also used to renovate old buildings made of stone or lime. France continues to be an avid user of hempcrete, and it grows in popularity there annually. Canada has followed France's direction in the organic building technologies sector, and hempcrete has become a growing innovation in Ontario and Quebec. There are two primary construction techniques used right now for implementing hempcrete. The first technique consists of using forms to cast or spray hempcrete directly in place on the construction site. The second technique consists of stacking prefabricated blocks that are delivered to the project site similar to masonry construction. Once hempcrete technology is implemented between timber framing, drywall or plaster is added for aesthetics and increased durability. Hempcrete can be used for a number of purposes in buildings, including roof, wall, slab, and render insulation, each of which has its own formulation and dosages of the various constituents respectively. Properties Mechanical properties Typically, hempcrete has a low mechanical performance. Hempcrete is a fairly new material and is still being studied. Several items affect the mechanical properties of hempcrete such as aggregate size, type of binder, proportions within the mixture, manufacturing method, molding method, and compaction energy. All studies show variability within hempcrete properties and determine that it is sensitive to many factors. 
A study was conducted that focuses on the variability and statistical significance of hempcrete properties by analyzing two sizes of hempcrete columns with hemp from two different distributors under a normal distribution. The coefficient of variance (COV) indicates the dispersion of experimental results and is important in understanding the variability among hempcrete properties. Young's modulus continually has a high COV across multiple experiments. The Young's modulus of hempcrete is 22.5 MPA. Young's modulus and compressive strength are two mechanical properties that are correlated. The compressive strength is typically around 0.3 MPA. Due to the lower compressive strength, hempcrete cannot be used for load-bearing elements in construction. Density is affected by drying kinetics, with a larger specific area the drying time decreases. The size of the specimen and the hemp shives should be accounted for when determining the density. In the model, the density of hempcrete is 415 kg/m3 with an average coefficient of variance (COV) of 6.4%. Hempcrete's low density material and resistance to cracking under movement make it suitable for use in earthquake-prone areas. Hempcrete walls must be used together with a frame of another material that supports the vertical load in building construction, as hempcrete's density is 15% that of traditional concrete. Studies in the UK indicate that the performance gain between and walls is insignificant. Hempcrete walls are fireproof, transmit humidity, resist mould, and have excellent acoustic performance. Limecrete, Ltd. (UK) reports a fire resistance rating of 1 hour per British/EU standards. Thermal properties Hempcrete's R-value (its resistance to heat transfer) can range from to , making it an efficient insulating material (the higher the R-value, the better the insulation). The porosity of hempcrete falls within the range of 71.1% to 84.3% by volume. The average specific heat capacity of the hempcrete ranges from 1000 to 1700 J/(kg⋅K). The dry thermal conductivity of hempcrete ranges from 0.05 to 0.138 W/(m⋅K). The low thermal diffusivity () and effusivity [286 J/(m2⋅K⋅s−1/2)] of hempcrete reduce the ability of hempcrete to activate the thermal mass. Hemp concrete has a low thermal conductivity, ranging from 0.06 to 0.6 W m−1 K−1, a total porosity of 68–80% and a density of 200 kg /m3 to 960 kg/m3. Hemp concrete is also an aerated material with high water vapour permeability and its total porosity very close to open porosity allowing it to absorb significant amounts of water. The water vapour diffusion resistance of hemp concrete ranges from 5 to 25. Furthermore, between 2 and 4.3 g/ (m2%RH), it is considered an excellent moisture regulator. It can absorb relative humidity when there is a surplus in the living environment and release it when there is a deficit. It is important to note that these properties depend on the composition of the material, the type of binder, temperature and humidity. Due to its latent heating effects, which are the results of its high thermal ability and comprehensive moisture control, hemp concrete exhibits phase change material properties. Due to the large variety of hemp, the porosity differs from one type to another, therefore its thermal insulating abilities vary too. The lower the density, the lower the heat transfer coefficient, a characteristic of insulating materials. 
On three cubic samples of hempcrete after 28 days of drying the heat transfer coefficient was measured using ISOMET 2114, a portable system for measuring the heat transfer of properties. Hempcrete has a coefficient of heat transfer of 0.0652 W/(m⋅K) and a specific weight of 296 kg/m3. Attention should be paid to mixing the hempcrete, as it influences the properties of the material. Further testing needs to be conducted in correlation to specimen size to determine the influence that size has on the properties of hempcrete. Other In the United States, a permit is needed for the use of hemp in building. Hempcrete has a high silica content, which makes it more resistant to biological degradation than other plant products. Benefits and constraints Hempcrete materials are a product of a type of binder and hemp shives size and quality, and the proportions in the mixture can greatly affect its properties and performance. The most notable limiting factor with hempcrete is the low mechanical performance. Due to low mechanical performance, the material should not be used for load-bearing structures. Although it is not known for its strength, hempcrete provides a high vapor permeability that allows for better control of temperature in an indoor environment. It can also be used as a filling material in frame structures and be used to make prefabricated panels. Altering the density of hempcrete mixtures also affects its use. Higher-density hempcrete mixtures are used for floor and roof insulation, while lower-density mixtures are used for indoor insulation and outdoor plasters. Hempcrete block walls can be laid without any covering or can be covered with finishing plasters. This latter uses the same hempcrete mixture but in different proportions. Since hempcrete contains a plant-based compound, walls need to be built with a joint in between the wall and ground to prevent capillary rising of water and runoff, blocks need to be installed above ground level and exterior walls should be protected with sand and plasters to avoid rotting shives. Life cycle analysis Just like any crop, hemp absorbs CO2 from the atmosphere while growing, so hempcrete is considered a carbon-storing material. A hempcrete block continually stores CO2 during its entire life, from fabrication to end-of-life, creating positive environmental benefits. Through a life cycle assessment (LCA) of hempcrete blocks using research and X-ray Powder Diffraction (XRPD), it was found that the blocks store a large quantity of carbon from photosynthesis during plant growth and by carbonation during the use phase of the blocks. The LCA of hempcrete blocks considers seven unit processes: hemp shives and production, binder production, transport of raw materials to the manufacturing company, hempcrete blocks production processes, transport of hempcrete blocks to the construction site, wall construction, and the use phase. The impact assessment of each process was analyzed using the following impact categories: abiotic depletion (ADP), fossil fuel depletion (ADP Fossil), global warming over a time interval of 100 years (GWP), ozone depletion (ODP), acidification (AP), eutrophication (EP), and photochemical ozone creation (POCP). The binder production provides the largest environmental impact while the transport phases are the second. During binder production in the lime calcination and clinker creation portion, the emissions are the most notable. 
A large amount of diesel consumption in the transport phases and during the manufacturing of hemp shives created a large portion of the cumulative energy demand and along with the calcination of lime which takes place in kilns, is a main source of fossil fuel emissions. Abiotic depletion is mostly attributed to the electricity used during binder production and although minimal, also during the block production processes. It is important to focus on the water content in a hempcrete mixture, because too much water can cause slow drying and create a negative impact, preventing lime carbonation. The main cause of the environmental footprint for hempcrete comes from the production of the binder. Reports have estimated that 18.5% - 38.4% of initial emissions from binder production can be recovered through the carbonation process. The specific amount of carbonates in the blocks actually increases with the age of the block. During the growth of hemp the plant absorbs , the binder begins to absorb after the mixing process, and the wall absorbs counteracting the greenhouse emissions, by acting as a carbon sink. A hempcrete block will continue to store carbon throughout its life and can be crushed and used again as a filler for insulation. The amount of capture within the net life cycle emissions of hempcrete is estimated to be between -1.6 to -79 kg e/m2. There is a correlation that increasing the mass of the binder which increases the mixture density will increase the total estimated carbon uptake via carbonation. The impacts arising from indirect land use changes of hemp cultivation, maintenance work, and end-of-life need to be studied to create a full cradle-to-grave environmental impact profile of hempcrete blocks. To counteract the negative environmental impacts that hempcrete blocks have on the environment the transport distances should be shortened as much as possible. Since hempcrete is not typically load-bearing, ratios should be explored to possibly completely remove the cement from the mixture. Summary Hempcrete is a fairly new natural building material whose usage has increased throughout European countries in recent years and is gaining traction within the United States. The Hemp Building Foundation submitted paperwork to the International Residential Codes (IRC) in February 2022 to certify the material as a national building material, allowing the construction industry to gain more familiarity with the material. Hempcrete is a construction building material that uses hemp shives, aggregate, water, and a type of binder to act as non-bearing walls, insulators, finishing plasters, and blocks. The material has low mechanical properties and low thermal conductivity, making it ideal for insulation material. Hempcrete blocks have a low carbon footprint and are effectively carbon sinks. Widespread codes and specifications still need to be developed for the widespread usage of hempcrete, but it shows promise to replace current non-bearing construction materials that negatively impact the environment. See also Fiber-reinforced concrete Hemp as a building material References Further reading External links Hemcrete application data from Limetechnology Appropriate technology Building materials Composite materials Concrete Energy conservation Heat transfer Hemp products Insulators Masonry Sustainable building Thermal protection
Hempcrete
[ "Physics", "Chemistry", "Engineering" ]
2,896
[ "Transport phenomena", "Sustainable building", "Physical phenomena", "Heat transfer", "Structural engineering", "Masonry", "Building engineering", "Composite materials", "Architecture", "Construction", "Materials", "Thermodynamics", "Concrete", "Matter", "Building materials" ]
16,743,556
https://en.wikipedia.org/wiki/Ontology%20for%20Biomedical%20Investigations
The Ontology for Biomedical Investigations (OBI) is an open-access, integrated ontology for the description of biological and clinical investigations. OBI provides a model for the design of an investigation, the protocols and instrumentation used, the materials used, the data generated and the type of analysis performed on it. The project is being developed as part of the OBO Foundry and as such adheres to all the principles therein, such as orthogonal coverage (i.e. clear delineation from other foundry member ontologies) and the use of a common formal language. In OBI the common formal language used is the Web Ontology Language (OWL). As of March 2008, a pre-release version of the ontology was made available at the project's SVN repository. Scope The Ontology for Biomedical Investigations (OBI) addresses the need for controlled vocabularies to support integration and joint ("cross-omics") analysis of experimental data, a need originally identified in the transcriptomics domain by the FGED Society, which developed the MGED Ontology as an annotation resource for microarray data. OBI uses the Basic Formal Ontology (BFO) as an upper-level ontology, that is, as a means of describing general entities that do not belong to a specific problem domain. As such, all OBI classes are subclasses of some BFO class. The ontology has the scope of modeling all biomedical investigations and as such contains ontology terms for aspects such as: biological material – for example, blood plasma; instrument (and parts of an instrument therein) – for example, DNA microarray, centrifuge; information content – such as an image or a digital information entity such as an electronic medical record; design and execution of an investigation (and individual experiments therein) – for example, study design, electrophoresis material separation; data transformation (incorporating aspects such as data normalization and data analysis) – for example, principal components analysis dimensionality reduction, mean calculation. Less 'concrete' aspects, such as the role a given entity may play in a particular scenario (for example the role of a chemical compound in an experiment) and the function of an entity (for example the digestive function of the stomach in nourishing the body), are also covered in the ontology. OBI consortium The need for the MGED Ontology was originally identified in the transcriptomics domain by the FGED Society, and the ontology was developed to address the needs of data integration. Following a mutual decision to collaborate, this effort later became a wider collaboration between groups such as FGED, PSI and MSI in response to the needs of areas such as transcriptomics, proteomics and metabolomics, and the FuGO (Functional Genomics Investigation Ontology) was created. This later became OBI, covering the wider scope of all biomedical investigations. As an international, cross-domain initiative, the OBI consortium draws upon a pool of experts from a variety of fields, not limited to biology. The current list of OBI consortium members is available at the OBI consortium website. The consortium is made up of a coordinating committee, which is a combination of two subgroups, the Community Representatives (those representing a particular biomedical community) and the Core Developers (ontology developers who may or may not be members of any single community). Separate from the coordinating committee is the Developers Working Group, which consists of developers within the communities collaborating in the development of OBI at the discretion of current OBI Consortium members.
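To make the scope described above a little more concrete, the following is a minimal, purely illustrative Python sketch of the kinds of entities OBI distinguishes (materials, instruments, information content, study design, data transformations). The class names and fields are invented for illustration only; they are not the actual OWL classes, identifiers, or structure used by OBI.

```python
# Toy illustration of OBI-style entity kinds; not the real ontology or its OWL classes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Material:            # e.g. "blood plasma"
    name: str

@dataclass
class Instrument:          # e.g. "DNA microarray", "centrifuge"
    name: str

@dataclass
class DataTransformation:  # e.g. "data normalization", "mean calculation"
    name: str

@dataclass
class Investigation:
    study_design: str                       # e.g. a named study design
    materials: List[Material] = field(default_factory=list)
    instruments: List[Instrument] = field(default_factory=list)
    transformations: List[DataTransformation] = field(default_factory=list)

# A hypothetical investigation annotated with OBI-like terms
inv = Investigation(
    study_design="parallel group design",
    materials=[Material("blood plasma")],
    instruments=[Instrument("DNA microarray")],
    transformations=[DataTransformation("data normalization"),
                     DataTransformation("principal components analysis")],
)
print(inv)
```

In the real ontology these notions are OWL classes arranged under BFO's upper-level categories, so the distinctions are enforced by a reasoner rather than by ad-hoc application classes like the ones sketched here.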
Papers on OBI References External links Knowledge engineering Technical communication Information science Semantic Web Ontology (information science) Bioinformatics
Ontology for Biomedical Investigations
[ "Engineering", "Biology" ]
724
[ "Bioinformatics", "Systems engineering", "Biological engineering", "Knowledge engineering" ]
16,743,975
https://en.wikipedia.org/wiki/Membrane%20bioreactor
Membrane bioreactors are combinations of membrane processes like microfiltration or ultrafiltration with a biological wastewater treatment process, the activated sludge process. These technologies are now widely used for municipal and industrial wastewater treatment. The two basic membrane bioreactor configurations are the submerged membrane bioreactor and the side stream membrane bioreactor. In the submerged configuration, the membrane is located inside the biological reactor and submerged in the wastewater, while in a side stream membrane bioreactor, the membrane is located outside the reactor as an additional step after biological treatment. Overview Water scarcity has prompted efforts to reuse waste water once it has been properly treated, known as "water reclamation" (also called wastewater reuse, water reuse, or water recycling). Among the treatment technologies available to reclaim wastewater, membrane processes stand out for their capacity to retain solids and salts and even to disinfect water, producing water suitable for reuse in irrigation and other applications. A semipermeable membrane is a material that allows the selective flow of certain substances. In the case of water purification or regeneration, the aim is to allow the water to flow through the membrane whilst retaining undesirable particles on the originating side. By varying the type of membrane, it is possible to get better pollutant retention of different kinds. Some of the required characteristics in a membrane for wastewater treatment are chemical and mechanical resistance for five years of operation and capacity to operate stably over a wide pH range. There are two main types of membrane materials available on the market: organic-based polymeric membranes and ceramic membranes. Polymeric membranes are the most commonly used materials in water and wastewater treatment. In particular, polyvinylidene difluoride (PVDF) is the most prevalent material due to its long lifetime and chemical and mechanical resistance. When used with domestic wastewater, membrane bioreactor processes can produce effluent of high enough quality for discharge into the oceans, surfaces, brackish bodies, or urban irrigation waterways. Other advantages of membrane bioreactors over conventional processes include reduced footprints and simpler retrofitting. It is possible to operate membrane bioreactor processes at higher mixed liquor suspended solids concentrations compared to conventional settlement separation systems, thus reducing the reactor volume to achieve the same loading rate. Recent technical innovation and significant membrane cost reduction have enabled membrane bioreactors to become an established process option to treat wastewater. Membrane bioreactors have become an attractive option for the treatment and reuse of industrial and municipal wastewater, as evidenced by their consistently rising numbers and capacity. The current membrane bioreactor market was estimated to be worth around US $216 million in 2006 and US$838.2 million in 2011, grounding projections that the market for membrane bioreactors was growing at an average rate of 22.4% and would reach a market size of US $3.44 billion in 2018. The global membrane bioreactor market is expected to grow in the near future due to various driving forces, for instance increasing scarcity of water worldwide which makes wastewater reclamation more profitable; this will likely be further aggravated by continuing climate change. 
Growing environmental concerns over industrial wastewater disposal along with declining freshwater resources across developing economies also account for increasing demand for membrane bioreactor technology. Population growth, urbanization, and industrialization will further complicate the business outlook. However, high initial investments and operational expenditure may hamper the global membrane bioreactor market. In addition, technological limitations, particularly the recurrent costs of membrane fouling, are likely to hinder production adoption. Ongoing research and development progress toward increasing output and minimizing sludge formation are anticipated to fuel industry growth. Membrane bioreactors can be used to reduce the footprint of an activated sludge sewage treatment system by removing some of the liquid components of the mixed liquor. This leaves a concentrated waste product that is then treated using the activated sludge process. Recent studies show the opportunity to use nanomaterials for the realization of more efficient and sustainable membrane bioreactors for wastewater treatment. History and basic operating parameters Membrane bioreactors were introduced in the late 1960s, shortly after commercial-scale ultrafiltration and microfiltration membranes became available. The original designs were introduced by Dorr-Oliver Inc. and combined the use of an activated sludge bioreactor with a cross-flow membrane filtration loop. The flat sheet membranes used in this process were polymeric and featured pore sizes ranging from 0.003 to 0.01 μm. Although the idea of replacing the settling tank of the conventional activated sludge process was attractive, it was difficult to justify the use of such a process because of the high cost of membranes, the low economic value of the product (tertiary effluent) and sometimes rapid losses of performance due to membrane fouling. As a result, the initial design focus was on the attainment of high fluxes, and it was, therefore, necessary to pump the mixed liquor and its suspended solids at high cross-flow velocity at significant energy demand (of the order 10 kWh/m3 product) to reduce fouling. Because of the poor economics of the first-generation devices, they only found applications in niche areas with special needs such as isolated trailer parks or ski resorts. The next breakthrough for the membrane bioreactor came in 1989 with the introduction of submerged membrane bioreactor configurations. Until then, membrane bioreactors were designed with a separation device located external to the reactor (side stream membrane bioreactors) and relied on high trans-membrane pressure to maintain filtration. The submerged configuration takes advantage of coarse bubble aeration to produce mixing and limit fouling. The energy demand of the submerged system can be up to 2 orders of magnitude lower than that of the side stream systems and submerged systems operate at a lower flux, demanding more membrane area. In submerged configurations, aeration is considered as one of the major parameters in process performance both hydraulic and biological. Aeration maintains solids in suspension, scours the membrane surface, and provides oxygen to the biomass, leading to better biodegradability and cell synthesis. Submerged membrane bioreactor systems became preferred to side stream configurations, especially for domestic wastewater treatment. 
The next key steps in membrane bioreactor development were the acceptance of modest fluxes (25 percent or less of those in the first generation) and the idea to use two-phase (bubbly) flow to control fouling. The lower operating cost obtained with the submerged configuration along with the steady decrease in the membrane cost led to an exponential increase in membrane bioreactor plant installations from the mid-1990s. Since then, further improvements in membrane bioreactor design and operation have been introduced and incorporated into larger plants. While earlier devices were operated at solid retention times as high as 100 days with mixed liquor suspended solids up to 30 g/L, the recent trend is to apply lower solid retention times (around 10–20 days), resulting in more manageable suspended solids levels (10 to 15 g/L). Thanks to these new operating conditions, the oxygen transfer and the pumping cost in the reactors have tended to decrease and the overall maintenance has been simplified. There is now a range of membrane bioreactor systems available commercially, most of which use submerged membranes although some side stream modules are available; these side stream systems also use two-phase flow for fouling control. Typical hydraulic retention times range between 3 and 10 hours. For the most part, hollow fiber and flat sheet membrane configurations are utilized in membrane bioreactor applications. Despite the more favorable energy usage of submerged membranes, there continued to be a market for the side stream configuration, particularly in smaller flow industrial applications. For ease of maintenance, side stream configurations can be installed on a lower level in a plant building, and thus membrane replacement can be undertaken without specialized lifting equipment. As a result, research and development has continued to improve the side stream configurations, and this has culminated in recent years with the development of low energy systems which incorporate more sophisticated control of the operating parameters coupled with periodic backwashes, which enable sustainable operation at energy usage as low as 0.3 kWh/m3 of product. Configurations Internal/submerged/immersed In the immersed Membrane Bioreactor (iMBR) configuration, the filtration element is installed in either the main bioreactor vessel or in a separate tank. The modules are positioned above the aeration system, fulfilling two functions, the supply of oxygen and the cleaning of the membranes. The membranes can be a flat sheet or tubular or a combination of both and can incorporate an online backwash system which reduces membrane surface fouling by pumping membrane permeate back through the membrane. In systems where the membranes are in a separate tank from the bioreactor, individual trains of membranes can be isolated to undertake cleaning regimes incorporating membrane soaks, however, the biomass must be continuously pumped back to the main reactor to limit mixed liquor suspended solids concentration increases. Additional aeration is also required to provide air scouring to reduce fouling. Where the membranes are installed in the main reactor, membrane modules are removed from the vessel and transferred to an offline cleaning tank. Usually, the internal/submerged configuration is used for larger-scale lower strength applications. 
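As a rough design illustration (not taken from the sources above): once a sustainable design flux in the modest range discussed here is chosen, the membrane area required for an immersed system follows directly from the plant flow. The flow and flux values below are assumptions picked only to show the arithmetic.

```python
# Illustrative membrane-area sizing for an immersed MBR; all numbers are assumed.
DESIGN_FLOW_M3_PER_DAY = 5000.0   # assumed plant flow
DESIGN_FLUX_LMH = 20.0            # assumed net design flux, L/(m²·h)

flow_l_per_h = DESIGN_FLOW_M3_PER_DAY * 1000.0 / 24.0   # convert m³/d to L/h
membrane_area_m2 = flow_l_per_h / DESIGN_FLUX_LMH        # A = Q / J

print(f"Required membrane area: {membrane_area_m2:,.0f} m²")
# Halving the design flux doubles the area: the basic trade-off between
# fouling-safe operation (low flux) and capital cost (membrane area).
```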
To optimize the reactor volume and minimize the production of sludge, submerged membrane bioreactor systems typically operate with mixed liquor suspended solids concentrations between 12,000 mg/L and 20,000 mg/L; hence they offer good flexibility in the selection of the design sludge retention time. It must be taken into account that an excessively high content of mixed liquor suspended solids may render the aeration system less effective; the classical solution to this optimization problem is to keep the mixed liquor suspended solids concentration close to 10,000 mg/L, which guarantees good mass transfer of oxygen together with a good permeation flux. This type of solution is widely accepted in larger-scale units, where the internal/submerged configuration is typically used, because of the higher relative cost of the membrane compared to the additional tank volume required. The immersed MBR has been the preferred configuration due to its lower energy consumption, high biodegradation efficiency, and lower fouling rate compared to side stream membrane bioreactors. In addition, iMBR systems can handle higher suspended solids concentrations: while traditional systems work only with suspended solids concentrations of about 2.5–3.5 g/L, iMBR can handle concentrations of 4–12 g/L, roughly a 300% increase in range. This type of configuration is adopted in industrial sectors including textile, food & beverage, oil & gas, mining, power generation, and pulp & paper. External/side stream In side stream membrane bioreactor technology, the filtration modules are outside the aerobic tank, hence the name side-stream configuration. As in the immersed or submerged configuration, the aeration system is used both to clean the membranes and to supply oxygen to the bacteria that degrade the organic compounds. The biomass is either pumped directly through several membrane modules in series and back to the bioreactor, or the biomass is pumped to a bank of modules, from which a second pump circulates the biomass through the modules in series. Cleaning and soaking of the membranes can be undertaken in situ with the use of an installed cleaning tank, pump, and pipework. The quality of the final product is such that it can be reused in process applications thanks to the filtration capacity of the micro- and ultrafiltration membranes. Usually, the external/side stream configuration is used for smaller-scale and higher-strength applications; its main advantage is the possibility of designing and sizing the tank and the membrane separately, with practical advantages for the operation and maintenance of the unit. As in other membrane processes, shear over the membrane surface is needed to prevent or limit fouling; the external/side stream configuration provides this shear with a pumping system, while the internal/submerged configuration provides it through aeration in the bioreactor, so in the side stream case there is an energy requirement to promote the shear by pumping. In this configuration fouling is also more pronounced due to the higher fluxes involved. Major considerations Fouling and fouling control Membrane bioreactor filtration performance inevitably decreases with filtration time due to the deposition of soluble and particulate materials onto and into the membrane, attributable to the interactions between activated sludge components and the membrane.
This major drawback and process limitation has been under investigation since the earliest membrane bioreactors and remains one of the most challenging issues facing further development. Fouling is the process by which particles (colloidal particles, solute macromolecules) are deposited or adsorbed onto the membrane surface or into its pores by physical and chemical interactions or mechanical action. This produces a reduction in the size of membrane pores or their blockage. Membrane fouling can cause severe flux drops and affects the quality of the water produced. Severe fouling may require intense chemical cleaning or membrane replacement. This increases the operating costs of a treatment plant. Membrane fouling has traditionally been thought to occur through four mechanisms: 1) complete pore blocking, 2) standard blocking, 3) intermediate blocking, and 4) cake layer formation. There are various types of foulants: biological (bacteria, fungi), colloidal (clays, flocs), scaling (mineral precipitates), and organic (oils, polyelectrolytes, humics). Membrane fouling can be accommodated either by allowing a decrease in permeation flux while holding transmembrane pressure constant or by increasing transmembrane pressure to maintain constant flux. Most wastewater treatment plants are operated in constant flux mode, and hence fouling phenomena are generally tracked via the variation of transmembrane pressure with time. In recent reviews covering membrane applications to bioreactors, it has been shown that, as with other membrane separation processes, membrane fouling is the most serious problem affecting system performance. Fouling leads to a significant increase in hydraulic resistance, manifested as permeate flux decline or transmembrane pressure increase when the process is operated under constant-transmembrane-pressure or constant-flux conditions respectively. In systems where flux is maintained by increasing transmembrane pressure, the energy required to achieve filtration increases. Frequent membrane cleaning is an alternative that significantly increases operating costs as a result of added cleaning agent costs, added production downtime, and more frequent membrane replacement. Membrane fouling results from the interaction between the membrane material and the components of the activated sludge liquor, which include biological flocs formed by a large range of living or dead microorganisms along with soluble and colloidal compounds. The suspended biomass has no fixed composition and varies with feed water composition and reactor operating conditions. Thus, though many investigations of membrane fouling have been published, the diverse range of operating conditions and feedwater matrices employed, the different analytical methods used, and the limited information reported in most studies on the suspended biomass composition have made it difficult to establish any generic behavior pertaining to membrane fouling in membrane bioreactors specifically. Air-induced cross flow in submerged membrane bioreactors can efficiently remove or at least reduce the fouling layer on the membrane surface. A recent review reports the latest findings on applications of aeration in the submerged membrane configuration and describes the performance benefits of gas bubbling. The choice of aeration rate is a key parameter in submerged membrane bioreactor design, as there is generally an optimal air flow rate beyond which further increases in aeration have no benefits for preventing fouling.
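Since fouling is usually tracked as a transmembrane pressure (TMP) rise at constant flux, a common way to reason about it is the resistance-in-series form of Darcy's law, TMP = μ · J · (Rm + Rf). The sketch below is only illustrative: the flux, viscosity, membrane resistance, and the crude linear fouling-rate assumption are placeholder values, not data from this article.

```python
# Illustrative resistance-in-series fouling model: TMP = mu * J * (Rm + Rf(t)).
# All parameter values are assumptions chosen for illustration only.
MU = 1.0e-3            # Pa·s, permeate viscosity (roughly water at 20 °C)
FLUX_LMH = 25.0        # L/(m²·h), assumed constant operating flux
R_MEMBRANE = 1.0e12    # 1/m, assumed clean-membrane resistance
FOULING_RATE = 2.0e10  # 1/m per day, assumed (crudely linear) growth of fouling resistance

flux_m_per_s = FLUX_LMH / 1000.0 / 3600.0   # convert L/(m²·h) to m³/(m²·s)

for day in range(0, 31, 5):
    r_fouling = FOULING_RATE * day
    tmp_pa = MU * flux_m_per_s * (R_MEMBRANE + r_fouling)
    print(f"day {day:2d}: TMP ≈ {tmp_pa / 1000:.1f} kPa")
# When the TMP approaches the plant's allowable limit, a maintenance or
# intensive chemical clean (as described below) is triggered to restore Rf.
```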
Many other antifouling strategies can be applied in membrane bioreactor applications. They include, for example: Intermittent permeation or relaxation, where the filtration is stopped at regular time intervals before being resumed. Particles deposited on the membrane surface tend to diffuse back to the reactor; this phenomenon is enhanced by the continuous aeration applied during this resting period. Membrane backwashing, where permeate water is pumped back to the membrane and flows through the pores to the feed channel, dislodging internal and external foulants. Air backwashing, where pressurized air on the membrane's permeate side builds up and is released within a very short period of time. Membrane modules therefore need to be in a pressurized vessel coupled to a vent system. Air usually does not go through the membrane. If it did, the air would dry the membrane and a re-wet step would be necessary, accomplished by pressurizing the feed side of the membrane. Proprietary antifouling products, such as Nalco's Membrane Performance Enhancer Technology. In addition, different types and intensities of chemical cleaning may also be recommended on typical schedules: chemically enhanced backwash (daily); maintenance cleaning with higher chemical concentration (weekly); intensive chemical cleaning (once or twice a year). Intensive cleaning may also be carried out when further filtration cannot be sustained because of an elevated transmembrane pressure. Each of the four membrane bioreactor suppliers Kubota, Evoqua, Mitsubishi and GE Water has its own chemical cleaning recipe; these differ mainly in terms of concentration and methods. Under normal conditions, the prevalent cleaning agents are NaOCl (sodium hypochlorite) and citric acid. It is common for membrane bioreactor suppliers to adapt specific protocols for chemical cleanings (i.e. chemical concentrations and cleaning frequencies) to individual facilities. Biological performances/kinetics Chemical oxygen demand removal and sludge yield Simply due to the high number of microorganisms in membrane bioreactors, pollutant uptake rates can be increased. This leads to better degradation in a given time span or to smaller required reactor volumes. In comparison to conventional activated sludge process treatments, which typically achieve 95 percent removal, removal can be increased to 96 to 99 percent in membrane bioreactors. Chemical oxygen demand (COD) and biological oxygen demand (BOD5) removal is found to increase with mixed liquor suspended solids concentration. Above 15 g/L, COD removal becomes almost independent of biomass concentration at >96 percent. Arbitrarily high suspended solids concentrations are not employed, however, lest oxygen transfer be impeded due to higher viscosity and non-Newtonian viscosity effects. Kinetics may also differ due to easier substrate access. In typical activated sludge process treatment, flocs may reach several hundred μm in size. This means that the substrate can reach the active sites only by diffusion, which causes an additional resistance and limits the overall reaction rate (diffusion-controlled). Hydrodynamic stress in membrane bioreactors reduces floc size (to 3.5 μm in side stream configurations) and thereby increases the effective reaction rate. As in the conventional activated sludge process, sludge yield is decreased at higher solids retention times or biomass concentrations.
Little or no sludge is produced at sludge loading rates of 0.01 kgCOD/(kgMLSS d). Because of the imposed biomass concentration limit, such low loading rates would result in enormous tank sizes or long hydraulic residence times in conventional activated sludge processes. Nutrient removal Nutrient removal is one of the main concerns in modern wastewater treatment, especially in areas that are sensitive to eutrophication. Nitrogen (N) is a pollutant present in wastewater that must be eliminated for multiple reasons: it reduces dissolved oxygen in surface waters, is toxic to the aquatic ecosystem, poses a risk to public health, and, together with phosphorus (P), is responsible for the excessive growth of photosynthetic organisms like algae. All these factors make its reduction a focus of wastewater treatment. In wastewater, nitrogen can be present in multiple forms. As in the conventional activated sludge process, the most widely applied technology for N-removal from municipal wastewater is currently nitrification combined with denitrification, carried out by nitrifying bacteria together with facultative organisms. Besides phosphorus precipitation, enhanced biological phosphorus removal can be implemented, which requires an additional anaerobic process step. Some characteristics of membrane bioreactor technology render enhanced biological phosphorus removal in combination with post-denitrification an attractive alternative that achieves very low nutrient effluent concentrations. In particular, a membrane bioreactor improves the retention of solids, which provides better biotreatment and supports the development of slower-growing microorganisms, especially nitrifiers, making membrane bioreactors particularly effective in the elimination of N (nitrification). Anaerobic MBRs Anaerobic membrane bioreactors (sometimes abbreviated AnMBR) were introduced in the 1980s in South Africa. However, anaerobic processes are normally used when a low-cost treatment is required that enables energy recovery but does not achieve advanced treatment (low carbon removal, no nutrient removal). In contrast, membrane-based technologies enable advanced treatment (disinfection), but at a high energy cost. Therefore, the combination of both can only be economically viable if a compact process for energy recovery is desired, or when disinfection is required after anaerobic treatment (cases of water reuse with nutrients). If maximal energy recovery is desired, a single anaerobic process will always be superior to a combination with a membrane process. Recently, anaerobic membrane bioreactors have seen successful full-scale application to the treatment of some types of industrial wastewaters, typically high-strength wastes. Example applications include the treatment of alcohol stillage wastewater in Japan and the treatment of salad dressing/barbecue sauce wastewater in the United States. Mixing and hydrodynamics As in any other reactor, the hydrodynamics (or mixing) within a membrane bioreactor plays an important role in determining the pollutant removal and fouling control within the system. It has a substantial effect on energy usage and size requirements, and therefore on the whole-life cost of a membrane bioreactor. The removal of pollutants is greatly influenced by the length of time fluid elements spend in the membrane bioreactor (i.e. the residence time distribution).
The residence time distribution is a description of the hydrodynamics of mixing in the system and it is determined by the design of the reactor (e.g. size, inlet/recycle flow rates, wall/baffle/mixer/aerator positioning, mixing energy input). An example of the effect of mixing is that a continuous stirred-tank reactor will not have as high pollutant conversion per unit volume of reactor as a plug flow reactor. The control of fouling, as previously mentioned, is primarily achieved via coarse bubble aeration. The distribution of bubbles around the membranes, the shear at the membrane surface for cake removal and the size of the bubble are greatly influenced by the hydrodynamics of the system. The mixing within the system can also influence the production of possible foulants. For example, vessels not completely mixed (i.e. plug flow reactors) are more susceptible to the effects of shock loads which may cause cell lysis and release of soluble microbial products. Many factors affect the hydrodynamics of wastewater processes and hence membrane bioreactors. These range from physical properties (e.g. mixture rheology and gas/liquid/solid density etc.) to fluid boundary conditions (e.g. inlet/outlet/recycle flow rates, baffle/mixer position etc.). However, some factors are peculiar to membrane bioreactors and these include the filtration tank design (e.g. membrane type, multiple outlets attributed to membranes, membrane packing density, membrane orientation, etc.) and its operation (e.g. membrane relaxation, membrane backflush, etc.). The mixing modeling and design techniques applied to membrane bioreactors are very similar to those used for conventional activated sludge systems. They include the relatively quick and easy compartmental modelling technique which will only derive the residence time distribution of a process (e.g. the reactor) or a process unit (e.g. the membrane filtration vessel) and which relies on broad assumptions of the mixing properties of each sub-unit. Computational fluid dynamics modeling, on the other hand, does not rely on broad assumptions about the mixing characteristics and instead attempts to predict the hydrodynamics from a fundamental level. It is applicable to all scales of fluid flow and can reveal much information about the mixing in a process, ranging from the residence time distribution to the shear profile on a membrane surface. A visualization of such modeling results is shown in the image. Investigations of membrane bioreactor hydrodynamics have occurred at many different scales ranging from examination of shear stress at the membrane surface to residence time distribution analysis for a complete membrane bioreactor. Cui et al. (2003) investigated the movement of Taylor bubbles through tubular membranes. Khosravi, M. (2007) examined an entire membrane filtration vessel using CFD and velocity measurements. Brannock et al. (2007) examined an entire MBR system using tracer study experiments and RTD analysis. Advantages Some of the advantages provided by membrane bioreactors are as follows. High quality effluent: given the small size of the membrane's pores, the effluent is clear and pathogen free. Independent control of solids retention time and hydraulic retention time: As all the biological solids are contained in the bioreactor, the solids retention time can be controlled independently from the hydrodynamic retention time. Small footprint: thanks to the membrane filtration, there is a high biomass concentration contained in a small volume. 
Robust to load variations: membrane bioreactors can be operated with a broad range of operation conditions. Compact process: compared to the conventional activated sludge process, membrane bioreactors are more compact. Market framework Regional insights The market for membrane bioreactors is segmented based on end-user type, such as municipal and industrial users, and end-user geography, for instance Europe, Middle East and Africa (EMEA), Asia-Pacific (APAC), and the Americas. In this line, in 2016, some studies and reports showed that the APAC region took the lead in terms of market share, owning 41.90%. On the other hand, the EMEA region's market share is approximately 31.34% and the Americas constitute 26.67% of the market. APAC has the largest membrane bioreactors market. Developing economies such as India, China, Indonesia, and the Philippines are major contributors to growth in this market region. APAC is considered one of the most disaster-prone regions in the world: in 2013, thousands of people died from water-related disasters in the region, accounting for nine-tenth of the water-related deaths, globally. In addition to this, the public water supply system in the region is not as developed when compared to other countries such as the US, Canada, the countries in Europe, etc. The membrane bioreactors market in the EMEA region has witnessed stable growth. Countries such as Saudi Arabia, the UAE, Kuwait, Algeria, Turkey, and Spain are major contributors to that growth rate. Scarcity of clean and fresh water is the key driver for the increasing demand for efficient water treatment technologies. In this regard, increased awareness about water treatment and safe drinking water is also driving the growth. Ultimately, the Americas region has been witnessing major demand from countries including the US, Canada, Antigua, Argentina, Brazil, and Chile. The membrane bioreactor market has grown on account of stringent regulatory enforcement towards the safe discharge of wastewater. The demand for this emerging technology comes mainly from the pharmaceuticals, food & beverages, automotive, and chemicals industries. See also List of waste-water treatment technologies Activated sludge model Membrane fouling Hollow fiber membrane References Sewerage Environmental engineering Bioreactors Membrane technology Water treatment it:Processo a membrana#Applicazioni della tecnologia MBR al trattamento delle acque reflue
Membrane bioreactor
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
5,737
[ "Bioreactors", "Biological engineering", "Separation processes", "Water treatment", "Chemical reactors", "Chemical engineering", "Biochemical engineering", "Water pollution", "Sewerage", "Microbiology equipment", "Membrane technology", "Civil engineering", "Environmental engineering", "Wat...
7,648,694
https://en.wikipedia.org/wiki/Enterprise%20systems%20engineering
Enterprise systems engineering (ESE) is the discipline that applies systems engineering to the design of an enterprise. As a discipline, it includes a body of knowledge, principles, and processes tailored to the design of enterprise systems. An enterprise is a complex, socio-technical system that comprises interdependent resources of people, information, and technology that must interact to fulfill a common mission. Enterprise systems engineering incorporates all the tasks of traditional systems engineering but is further informed by an expansive view of the political, operational, economic, and technological (POET) contexts in which the system(s) under consideration are developed, acquired, modified, maintained, or disposed. Enterprise systems engineering may be appropriate when the complexity of the enterprise exceeds the scope of the assumptions upon which textbook systems engineering are based. Traditional systems engineering assumptions include relatively stable and well understood requirements, a system configuration that can be controlled, and a small, easily discernible set of stakeholders. An enterprise systems engineer must produce a different kind of analysis on the people, technology, and other components of the organization in order to see the whole enterprise. As the enterprise becomes more complex, with more parameters and people involved, it is important to integrate the system as much as possible to enable the organization to achieve a higher standard. Elements Four elements are needed for enterprise systems engineering to work. These include development through adaption, strategic technical planning, enterprise governance, and ESE processes (with stages). Development through adaptation Development through adaptation is a way to compromise with the problems and obstacles in complex systems. Over time, the environment changes and adaptation is required to continue development. For example, mobile phones have undergone numerous modifications since their introduction. Initially, the devices were considerably larger than those seen in later iterations. Over time, variations in size and design have been observed across different generations of mobile phones. Additionally, the evolution of mobile data technology from 1G to 5G has influenced the speed and convenience of mobile phone usage. Strategic technical planning Strategic technical planning (STP) gives the enterprise the picture of their aim and objectives. STP components are: Mission statement Needs assessment Technology descriptions and goal statement Hardware and software requirement Budget plan Human Resources Enterprise governance Enterprise governance is defined as 'the set of responsibilities and practices exercised by the board and executive management to provide strategic direction, ensure that objectives are achieved, ascertain that risks are managed appropriately and verify that the organization's resources are used responsibly,' according to CIMA Official Terminology. EG allows one to make the right decision on the choice of CEO and executives for the company, and also to identify the risks of the company. Processes Four steps comprise the enterprise system engineering process: technology planning (TP); capabilities-based engineering analysis (CBEA); enterprise architecture (EA); and enterprise analysis and assessment (EA&A). Technology planning TP looks for technologies key to the enterprise. This step aims to identify the innovative ideas and choose the technologies that are useful for the enterprise. 
Capabilities-based engineering analysis CBEA is an analysis method that focuses on elements that the whole enterprise needs. The three steps are purpose formulation, exploratory analysis, and evolutionary planning: Purpose formulation Assess stakeholder interest – understand what the stakeholders want and value Specify outcome spaces – find solutions for several conditions and the goal for the operations Frame capability portfolios – collect fundamental elements Exploratory analysis Assess performance and cost – identify the performance and cost under different conditions and find solutions to improve them Explore concepts – search for new concepts and transform them into advanced capabilities Determine the need for more variety – examine the risks and opportunities and decide whether new approaches are needed Evolutionary planning Assess enterprise impacts – investigate the effects on the enterprise in technical and capability terms Examine evolution strategies – explore and construct further strategies and evolution routes Develop capability road map – plan for the capability area, including the analysis and decision making that serve as tools for assessment and development of the enterprise Enterprise architecture EA is a model that illustrates the vision, network and framework of an organization. The four aspects (according to Michael Platt) are business, application, information and technology. The benefits are improved decision making, increased IT efficiency and reduced losses. Business – the strategies and processes by which the business operates Application – the interactions and communications, along with the processes used in the company Information – the logical data and statistics that the organization requires to run properly and actively Technology – the software, hardware and different operational systems used in the company All the elements depend on and rely on each other in order to build the infrastructure. Enterprise analysis and assessment Enterprise analysis and assessment aims to assess whether the enterprise is going in the right direction and helps to make correct decisions. Qualities required for this step include awareness of technologies, knowing and understanding command and control (C2) issues, and using modeling and simulation (M&S) to explore the implications. Activities and actions for this step include: Multi-scale analysis Early and continuous war fighter operational assessment Lightweight, portable M&S-based C2 capability representations Development software available for assessment Minimal infrastructure Flexible M&S operator-in-the-loop (OITL) and hardware-in-the-loop (HWIL) capabilities In-line, continuous performance monitoring and selective forensics Traditional systems engineering Traditional systems engineering (TSE) is defined here as engineering applied at the level of an individual sub-system. Elements: TSE is conducted by an external designer It addresses a stable system which does not change of its own accord Operation and development are independent of each other People do not play an important role in it Large machines behave in expected ways A survey compared ESE and TSE. The survey reported that the two are complementary and interdependent. ESE had a higher rating, while TSE could be part of ESE. The combination could be ideal. Applications The two types of ESE application are Information Enterprise Systems Engineering and Social Enterprise Systems Engineering.
Information Enterprise Systems Engineering (IESE) IESE is a system built up to meet the requirements and expectations of different stakeholders in the organization. There must be an input device to collect the information and an output device to satisfy the information needs. There are three different aspects to the framework of IESE: Functional view Topology view Physical view There are also different rules for the IESE model: interchangeable points of view; detailed and well-displayed views; views showing the specific methods, solutions and techniques; consistent views; and supported viewpoints. Social Enterprise System Engineering This is a framework that involves planning, analyzing, mapping, and drawing a network of the process for enterprises and stakeholders. Moreover, it creates social value for entrepreneurship and explores and focuses on social and societal issues. It forms a connection between social enterprise and systems engineering. There is a Social Enterprise Systems Engineering V-model, in which two or more social elements are established on the basis of the systems engineering framework, for example more social interface analysis that reviews stakeholders' requirements, and more activities and interactions between stakeholders to exchange opinions. Opportunity and risk management There are opportunities and risks in ESE, and enterprises have to be aggressive in seeking opportunities while also finding ways to avoid or minimize the risks. Opportunity is a trigger element that may lead to the accomplishment of objectives. Risk is a potential occurrence that may affect the performance of the entire system. Risk management is important for several reasons: identifying risks beforehand allows actions to be prepared that prevent or minimize them; since risks can cost the enterprise, determining the risk events can reduce the amount of loss; and it helps to decide how to allocate human or technology resources in order to avoid the most critical risks. There are several steps in the enterprise risk and opportunity management process: Prepare the risk and opportunity plan – select the team and representatives Identify risks – complete risk statements for each risk Identify opportunities – people working at the tactical level and managers must understand the opportunities in order to take further action Evaluate the enterprise risks and opportunities – decide which are the most critical and vital Develop the plan – develop it after identification and evaluation, with different strategies See also Enterprise architecture Enterprise engineering Enterprise life cycle Industrial engineering Systems engineering Soft systems methodology System of systems System of systems engineering (SoSE) Risk management plan Technology roadmap References 24. "Enterprise Architecture | Centric". Business Consulting. Retrieved 2024-1-30. Further reading R.E. Giachetti, (2010), Design of Enterprise Systems, CRC Press, Boca Raton, Florida. Oscar A. Saenz, and Chin-Sheng Chen (2004). "A Framework for Enterprise Systems Engineering" Robert S. Swarz, and Joseph K. DeRosa (2006). A Framework for Enterprise Systems Engineering Processes External links Department of Industrial and Enterprise Systems Engineering University of Illinois at Urbana-Champaign. MIT Engineering Systems Division Enterprise modelling Systems engineering
Enterprise systems engineering
[ "Engineering" ]
1,769
[ "Systems engineering", "Enterprise modelling" ]
7,651,012
https://en.wikipedia.org/wiki/Apollo%20M.%20O.%20Smith
Apollo Milton Olin Smith (usually referred to as A.M.O. Smith) (July 2, 1911 – May 1, 1997) was an important figure in the aerodynamics field at Douglas Aircraft from 1938 to 1975 and an early pioneer in the area of computational fluid dynamics. Early life A.M.O. Smith was born in Columbia, Missouri. He graduated from Woodrow Wilson High School in Long Beach, California in 1929 and went on to study at Compton Junior College in Compton, California and finally the California Institute of Technology in Pasadena, California, where he received his BS in 1936 and his MS in 1938. While at Long Beach, he and John Pierce were members of the Long Beach Glider Club, one of the earliest glider clubs in southern California. While at Caltech, he built and tested a number of rockets with Professor Theodore von Kármán's students Frank Malina, Edward Forman, Jack Parsons and Tsien Hsue-shen. This work led to the formation of Aerojet and the Jet Propulsion Laboratory several years later. Career In June 1938, Smith was hired by the El Segundo Division of Douglas Aircraft. During his time there, he worked on aerodynamic and preliminary design problems of the DC-5, SBD Dauntless, DB-7 Boston, A-20 Havoc and A-26 Invader. In October 1942 he went on a leave of absence, at the request of General H.H. Arnold, to help organize and develop the newly formed Aerojet company as its first Chief Engineer. Under his guidance, the engineering organization at Aerojet grew from six people to over 400 by the time he left. This period saw the development and quantity production of the JATO type rocket at Aerojet. After he returned to Douglas Aircraft in March 1944, he resumed work in aerodynamics and preliminary design. He was responsible for the detailed aerodynamic design of the D-558-I Skystreak, which for a period held the world speed record. He was also responsible for the design of the F3D-1 Skyknight. At the end of World War II, he was a member of the US Naval Technical Mission in Europe. In his three months touring captured German aeronautical facilities, he became familiar with the German work on the low drag properties of swept wings at transonic speeds and their development of tailless aircraft. After returning to Douglas, he proposed and began studies for a tailless aircraft. These studies culminated in the design and production of the F4D-1 Skyray interceptor. For a period, the F4D-1 held six FAI World Records, including absolute speed and climb performance. In 1948, he became the Supervisor of Design Research at Douglas, a position he held until 1954. During this period, he conducted research into a number of areas, including laminar flow control and a means of calculating low-speed flow about arbitrary bodies – computational fluid dynamics. In 1954 he became Supervisor of Aerodynamics Research and from 1969 to 1975 he was Chief Aerodynamics Engineer - Research at Douglas. In this period, he oversaw development of practical methods of analyzing laminar and turbulent boundary layer flow, new and improved static pressure probes, the hydrogen bubble technique of flow visualization, potential flow analysis, analysis of stability and transition of boundary layers and the e^N method of predicting boundary layer transition. In June 1975, he retired from what was then McDonnell Douglas. After retiring, he was appointed adjunct professor at UCLA, a position he held from 1975 to 1980. Personal life Smith was married to Elisabeth Caroline Krost on December 5, 1943. They had three children.
References Cebeci, Tuncer, Legacy of a Gentle Genius; The Life of A.M.O. Smith, Horizons Publishing, 1999. See also Cebeci–Smith model Guggenheim Aeronautical Laboratory 1911 births 1997 deaths American aerospace engineers Early spaceflight scientists Fluid dynamicists Computational fluid dynamicists Businesspeople from Columbia, Missouri 20th-century American engineers 20th-century American businesspeople
Apollo M. O. Smith
[ "Chemistry" ]
812
[ "Fluid dynamicists", "Fluid dynamics" ]
7,651,674
https://en.wikipedia.org/wiki/Ceramic%20foam
Ceramic foam is a tough foam made from ceramics. Manufacturing techniques include impregnating open-cell polymer foams internally with ceramic slurry and then firing in a kiln, leaving only ceramic material. The foams may consist of several ceramic materials, such as aluminium oxide, a common high-temperature ceramic, and get their insulating properties from the many tiny air-filled voids within the material. The foam can be used not only for thermal insulation, but for a variety of other applications such as acoustic insulation, absorption of environmental pollutants, filtration of molten metal alloys, and as substrate for catalysts requiring large internal surface area. It has been used as a stiff lightweight structural material, specifically for support of reflecting telescope mirrors. Properties Ceramic foams are hardened ceramics with pockets of air or another gas trapped in pores throughout the body of the material. These materials can be fabricated with as much as 94 to 96% air by volume, giving a very large specific surface area, and with temperature resistance as high as 1700 °C. Because many ceramics are already oxides or other inert compounds, there is little danger of oxidation or reduction of the material. Previously, pores had been avoided in ceramic components because of ceramics' brittle behavior. However, in practice ceramic foams have somewhat advantageous mechanical properties, showing high strength and plastic toughness compared to bulk ceramics. One example is crack propagation, where the stress at the crack tip is given by σt = σ(1 + 2√(a/r)), where σt is the stress at the tip of the crack, σ is the applied stress, a is the crack size and r is the radius of curvature of the crack tip. For certain stress applications, this means ceramic foams actually outperform bulk ceramics, because the porous pockets of air blunt the crack tip (increasing its radius of curvature), disrupting crack propagation and decreasing the likelihood of failure. Preparation methods Organic foam impregnating method The organic foam impregnating method is one of the more widely used in industry; it creates a ceramic foam with a 3D mesh skeleton structure by coating a ceramic slurry onto a polyurethane organic foam mesh body. The ceramic foam is obtained by allowing the body to dry at room temperature and then burning out the mesh body. This method is best used to prepare silicon carbide foam ceramics. Foaming method The foaming method uses a chemical reaction of a foaming agent. The foaming agent generates volatile gas that foams the slurry. The slurry is dried and sintered to obtain the ceramic foam. The product's shape and density can be controlled and manipulated with the foaming method. This method can be used in the preparation of small-pore-size, closed-cell ceramics. Manufacturing Much like metal foams, there are a number of accepted methods for creating ceramic foams. One of the earliest and still most common is the polymeric sponge method. A polymeric sponge is covered with a ceramic in suspension, and after rolling to ensure all pores have been filled, the ceramic-coated sponge is dried and pyrolysed to decompose the polymer, leaving only the porous ceramic structure. The foam must then be sintered for final densification. This method is widely used because it is effective with any ceramic able to be suspended; however, large amounts of gaseous byproducts are released, and cracking due to differences in thermal expansion coefficients is common.
While the above are both based on the use of a sacrificial template, there are also direct foaming methods that can be used. These methods involve pumping air into a suspended ceramic before setting and sintering. This is difficult because wet foams are thermodynamically unstable and can end up with very large pores after setting. A recent method of creating aluminum oxide foams has also been developed. This technique involves heating crystals with the metal and forming compounds until a solution is created. At this point, polymer chains form and grow, causing the entire mixture to separate into a solvent and polymer. As the mixture begins to boil, air bubbles are trapped in solution and locked into place as the material is heated and the polymer is burned off. Use Insulation Due to ceramics' extremely low thermal conductivity, the most obvious use of a ceramic foam is as an insulation material. Ceramic foams are notable in this regard because they are composed of very common compounds, such as aluminum oxide, which makes them harmless, unlike asbestos and some other ceramic fibers. Their high strength and hardness also allow them to be used as structural materials for low-stress applications. Electronics With easily controlled porosities and microstructures, ceramic foams have seen growing use in evolving electronics applications. These applications include electrodes and scaffolds for solid oxide fuel cells and batteries. Foams can also be used as cooling components for electronics by separating a pumped coolant from the circuits themselves. For this application, silica, aluminum oxide, and aluminum borosilicate fibers can be used. Pollution control Ceramic foams have been proposed as a means of pollutant control, particularly for particulate matter from engines. They are effective because the voids can capture particulates as well as support a catalyst that can induce oxidation of the captured particulates. Because other materials are easily deposited within ceramic foams, these oxidation-inducing catalysts can be distributed through the entire foam, increasing effectiveness. Filtering Ceramic foam filters (CFF) are used for the filtration of liquid metal. Passing liquid metal through the ceramic foam filter reduces impurities, including nonmetallic inclusions, in the liquid metal and the corresponding finished product (casting, sheet, billet, etc.). They have found successful application in continuous casting (sheet), semi-continuous casting (billet and slab), and casting gating systems in metal foundries. Wastewater treatment Due to the foam's unique pore structure and large specific surface area, ceramic foam can be used as a filter for wastewater. The filtration process is a combination of adsorption, surface filtration, and deep filtration, with deep filtration accounting for the majority of the removal. Construction Closed-cell ceramic foam serves as a good insulation material for walls and roofs. The large number of closed cells allows the material to resist corrosion and to absorb sound internally and externally. Buildings in China have utilized ceramic foam as a thermal insulation material. Noise reduction Foam ceramics are used for sound absorption in wet and oily environments. The sound waves vibrate in the pores of the foam and the energy is transformed into heat through friction and air resistance, thus reducing echoes in the environment.
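As a small numerical aside tying together two points made earlier in this article, the porosity figure from the Properties section (94 to 96% air by volume) and the crack-tip expression σt = σ(1 + 2√(a/r)) can both be evaluated directly. The solid density of alumina, the applied stress, and the crack dimensions below are assumed, illustrative values, not data from this article.

```python
import math

# 1) Bulk density of an alumina foam from its porosity (illustrative values).
POROSITY = 0.95            # 95% air by volume, within the 94-96% range cited above
RHO_ALUMINA = 3950.0       # kg/m³, typical solid density of aluminium oxide (assumed)
rho_foam = (1.0 - POROSITY) * RHO_ALUMINA
print(f"Foam bulk density ≈ {rho_foam:.0f} kg/m³")

# 2) Crack-tip stress concentration, sigma_t = sigma * (1 + 2*sqrt(a/r)).
def crack_tip_stress(applied_stress, crack_size, tip_radius):
    return applied_stress * (1.0 + 2.0 * math.sqrt(crack_size / tip_radius))

sigma = 10e6      # Pa, assumed applied stress
a = 100e-6        # m, assumed crack size
for r in (0.1e-6, 10e-6):   # a sharp tip vs. a tip blunted by a pore (assumed radii)
    print(f"tip radius {r * 1e6:4.1f} µm -> stress at tip ≈ "
          f"{crack_tip_stress(sigma, a, r) / 1e6:.0f} MPa")
```

With these assumed numbers, blunting the crack tip from 0.1 µm to 10 µm reduces the tip stress by roughly an order of magnitude, which is the qualitative argument made in the Properties section for why the pores can improve toughness.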
Automobile Due to the three-dimensional connected mesh structure, high temperature resistance, and thermal stability of ceramic foam, its use in catalytic converters in exhaust systems helps remove oxides and other particulate matter from exhaust gases. Biomaterial Current research often sees ceramic foams formulated with Bioglass to create tissue scaffolds for bone repair. Their porous character shows promise in load-bearing bone tissue engineering applications. The bioglass makes the material bioactive, forming a hydroxyapatite layer on the surface of the material as biological fluid comes into contact with the glass-ceramic foam. Glass-ceramic shows promise because of its adequate porosity, which allows cells to migrate through the scaffold, its high mechanical strength to bear load, and its good bioactivity, which allows cells to flourish. References Ceramic engineering Ceramic materials Foams Building insulation materials
Ceramic foam
[ "Chemistry", "Engineering" ]
1,522
[ "Ceramic engineering", "Foams", "Ceramic materials" ]
7,652,313
https://en.wikipedia.org/wiki/Leaf%20angle%20distribution
The leaf angle distribution (or LAD) of a plant canopy refers to the mathematical description of the angular orientation of the leaves in the vegetation. Specifically, if each leaf is conceptually represented by a small flat plate, its orientation can be described with the zenith and the azimuth angles of the surface normal to that plate. If the leaf has a complex structure and is not flat, it may be necessary to approximate the actual leaf by a set of small plates, in which case there may be a number of leaf normals and associated angles. The LAD describes the statistical distribution of these angles. Examples of leaf angle distributions Different plant canopies exhibit different LADs: for instance, grasses and willows have their leaves largely hanging vertically (such plants are said to have an erectophile LAD), while oaks tend to maintain their leaves more or less horizontally (these species are known as having a planophile LAD). In some tree species, leaves near the top of the canopy follow an erectophile LAD while those at the bottom of the canopy are more planophile. This may be interpreted as a strategy by that plant species to maximize exposure to light, an important constraint on growth and development. Yet other species (notably sunflower) are capable of reorienting their leaves throughout the day to optimize exposure to the Sun: this is known as heliotropism. Importance of LAD The LAD of a plant canopy has a significant impact on the reflectance, transmittance and absorption of solar light in the vegetation layer, and thus also on its growth and development. LAD can also serve as a quantitative index to monitor the state of the plants, as wilting usually results in more erectophile LADs. Models of radiation transfer need to take this distribution into account to predict, for instance, the albedo or the productivity of the canopy. Measuring LAD Accurately measuring the statistical properties of leaf angle distributions is not a trivial matter, especially for small leaves. Clinometers can be used but may be rather bulky or inconvenient. Most leaves are quite light and tend to move with the slightest breeze or air turbulence. Nevertheless, when these environmental conditions are suitable or can be controlled, it is possible to acquire data on leaf orientation. This may be done, for instance, with a Spatial Coordinate Apparatus, which is an articulated mechanical device capable of recording the position in three-dimensional space of three separate points forming a small triangle. The orientation of the triangle is computed from these coordinates. Yet another approach is to use a laser scanner: this instrument can record the angular and distance coordinates of the intersection of a laser beam with objects within its range. The LAD of canopy leaves can be derived from such measurements. Extensive data sets of LAD have been recorded, especially during intensive field campaigns, such as the First ISLSCP Field Experiment (FIFE) or the SAFARI 2000 field campaign. See the external links below for access to these data. LAD functions In general, LADs are modelled with one- or two-parameter functions including ellipsoidal, rotated-ellipsoidal and Beta functions. Their forms or parameters can be estimated from in-situ data such as hemispherical photography and lidar ranging data. Comparisons of different LAD functions against in-situ measurements show that two-parameter functions (especially the Beta function) may perform better than one-parameter functions. 
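As a rough illustration of the two-parameter Beta modelling mentioned above, the short sketch below evaluates a Beta-distributed LAD over leaf inclination angles. It assumes SciPy is available; the parameter values and the rescaling of the angle range to [0, 90] degrees are illustrative choices, not values taken from this article.

```python
# Minimal sketch (illustrative only): a two-parameter Beta leaf angle
# distribution evaluated over leaf inclination angles in [0, 90] degrees.
import numpy as np
from scipy.stats import beta

def lad_beta(theta_deg, mu, nu):
    """Probability density of leaf inclination angle (degrees) under a
    Beta(mu, nu) distribution rescaled from [0, 1] to [0, 90] degrees."""
    t = np.asarray(theta_deg) / 90.0        # map angle to the unit interval
    return beta.pdf(t, mu, nu) / 90.0       # rescale the density to degrees

# Example parameters for a roughly planophile canopy (mass at small angles)
angles = np.linspace(0.0, 90.0, 10)
print(lad_beta(angles, mu=1.2, nu=2.5))

# Mean leaf inclination angle implied by these example parameters
print(90.0 * 1.2 / (1.2 + 2.5))
```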
See also Orixa japonica, a plant with an unusual leaf angle distribution References J. Ross (1981) The Radiation Regime and Architecture of Plant Stands, 391 pp., W. Junk, Boston, Massachusetts. David M. Gates (1980) Biophysical Ecology, Springer-Verlag, New York, 611 pp., (especially pages 379–381). Wang W. M., Li Z.-L. and Su H.-B., 2007, Comparison of leaf angle distribution functions: effects on extinction coefficient and sunlit foliage, Agricultural and Forest Meteorology, 2007, Vol. 143, NO. 1-2, pp. 106–122. External links Leaf angle data (FIFE) Safari 2000 canopy structural measurements, Kalahari Transect, wet season 2001 Radiation Plant morphology Leaves
Leaf angle distribution
[ "Physics", "Chemistry", "Biology" ]
855
[ "Transport phenomena", "Physical phenomena", "Plants", "Plant morphology", "Waves", "Radiation" ]
15,028,328
https://en.wikipedia.org/wiki/MED30
Mediator of RNA polymerase II transcription subunit 30 is an enzyme that in humans is encoded by the MED30 gene. It represents subunit Med30 of the Mediator complex and is metazoan-specific, having no homologues in yeasts. Interactions MED30 has been shown to interact with MED22. References Further reading Protein domains
MED30
[ "Biology" ]
70
[ "Protein domains", "Protein classification" ]
15,029,253
https://en.wikipedia.org/wiki/BATF%20%28gene%29
Basic leucine zipper transcription factor, ATF-like, also known as BATF, is a protein which in humans is encoded by the gene. Function The protein encoded by this gene is a nuclear basic leucine zipper (bZIP) protein that belongs to the AP-1/ATF superfamily of transcription factors. The leucine zipper of this protein mediates dimerization with members of the Jun family of proteins. This protein is thought to be a negative regulator of AP-1/ATF transcriptional events. Mice without the BATF gene (BATF knockout mice) lacked a type of inflammatory immune cell (Th17) and were resistant to conditions that normally induces an autoimmune condition similar to multiple sclerosis. Interactions BATF (gene) has been shown to interact with IFI35. References Further reading External links Transcription factors
BATF (gene)
[ "Chemistry", "Biology" ]
178
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,029,327
https://en.wikipedia.org/wiki/MXD4
Max-interacting transcriptional repressor MAD4 is a protein that in humans is encoded by the MXD4 gene. Function This gene is a member of the MAD gene family . The MAD genes encode basic helix-loop-helix-leucine zipper proteins that heterodimerize with MAX protein, forming a transcriptional repression complex. The MAD proteins compete for MAX binding with MYC, which heterodimerizes with MAX forming a transcriptional activation complex. Studies in rodents suggest that the MAD genes are tumor suppressors and contribute to the regulation of cell growth in differentiating tissues. References Further reading External links Transcription factors
MXD4
[ "Chemistry", "Biology" ]
131
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
15,030,334
https://en.wikipedia.org/wiki/CNGB1
Cyclic nucleotide gated channel beta 1, also known as CNGB1, is a human gene encoding an ion channel protein. See also Cyclic nucleotide-gated ion channel References Further reading External links GeneReviews/NCBI/NIH/UW entry on Retinitis Pigmentosa Overview Ion channels
CNGB1
[ "Chemistry" ]
68
[ "Neurochemistry", "Ion channels" ]
15,030,445
https://en.wikipedia.org/wiki/EGR3
Early growth response protein 3 is a protein in humans, encoded by the EGR3 gene. The gene encodes a transcriptional regulator that belongs to the EGR family of C2H2-type zinc-finger proteins. It is an immediate-early growth response gene which is induced by mitogenic stimulation. The protein encoded by this gene participates in the transcriptional regulation of genes in controlling biological rhythm. It may also play a role in muscle development. References Further reading External links Transcription factors
EGR3
[ "Chemistry", "Biology" ]
100
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
13,925,681
https://en.wikipedia.org/wiki/Bramble%E2%80%93Hilbert%20lemma
In mathematics, particularly numerical analysis, the Bramble–Hilbert lemma, named after James H. Bramble and Stephen Hilbert, bounds the error of an approximation of a function by a polynomial of order at most in terms of derivatives of of order . Both the error of the approximation and the derivatives of are measured by norms on a bounded domain in . This is similar to classical numerical analysis, where, for example, the error of linear interpolation can be bounded using the second derivative of . However, the Bramble–Hilbert lemma applies in any number of dimensions, not just one dimension, and the approximation error and the derivatives of are measured by more general norms involving averages, not just the maximum norm. Additional assumptions on the domain are needed for the Bramble–Hilbert lemma to hold. Essentially, the boundary of the domain must be "reasonable". For example, domains that have a spike or a slit with zero angle at the tip are excluded. Lipschitz domains are reasonable enough, which includes convex domains and domains with continuously differentiable boundary. The main use of the Bramble–Hilbert lemma is to prove bounds on the error of interpolation of function by an operator that preserves polynomials of order up to , in terms of the derivatives of of order . This is an essential step in error estimates for the finite element method. The Bramble–Hilbert lemma is applied there on the domain consisting of one element (or, in some superconvergence results, a small number of elements). The one-dimensional case Before stating the lemma in full generality, it is useful to look at some simple special cases. In one dimension and for a function that has derivatives on interval , the lemma reduces to where is the space of all polynomials of degree at most and indicates the th derivative of a function . In the case when , , , and is twice differentiable, this means that there exists a polynomial of degree one such that for all , This inequality also follows from the well-known error estimate for linear interpolation by choosing as the linear interpolant of . Statement of the lemma Suppose is a bounded domain in , , with boundary and diameter . is the Sobolev space of all function on with weak derivatives of order up to in . Here, is a multiindex, and denotes the derivative times with respect to , times with respect to , and so on. The Sobolev seminorm on consists of the norms of the highest order derivatives, and is the space of all polynomials of order up to on . Note that for all and , so has the same value for any . Lemma (Bramble and Hilbert) Under additional assumptions on the domain , specified below, there exists a constant independent of and such that for any there exists a polynomial such that for all The original result The lemma was proved by Bramble and Hilbert under the assumption that satisfies the strong cone property; that is, there exists a finite open covering of and corresponding cones with vertices at the origin such that is contained in for any . The statement of the lemma here is a simple rewriting of the right-hand inequality stated in Theorem 1 in. The actual statement in is that the norm of the factorspace is equivalent to the seminorm. The norm is not the usual one but the terms are scaled with so that the right-hand inequality in the equivalence of the seminorms comes out exactly as in the statement here. 
In the original result, the choice of the polynomial is not specified, and the value of constant and its dependence on the domain cannot be determined from the proof. A constructive form An alternative result was given by Dupont and Scott under the assumption that the domain is star-shaped; that is, there exists a ball such that for any , the closed convex hull of is a subset of . Suppose that is the supremum of the diameters of such balls. The ratio is called the chunkiness of . Then the lemma holds with the constant , that is, the constant depends on the domain only through its chunkiness and the dimension of the space . In addition, can be chosen as , where is the averaged Taylor polynomial, defined as where is the Taylor polynomial of degree at most of centered at evaluated at , and is a function that has derivatives of all orders, equals to zero outside of , and such that Such function always exists. For more details and a tutorial treatment, see the monograph by Brenner and Scott. The result can be extended to the case when the domain is the union of a finite number of star-shaped domains, which is slightly more general than the strong cone property, and other polynomial spaces than the space of all polynomials up to a given degree. Bound on linear functionals This result follows immediately from the above lemma, and it is also called sometimes the Bramble–Hilbert lemma, for example by Ciarlet. It is essentially Theorem 2 from. Lemma Suppose that is a continuous linear functional on and its dual norm. Suppose that for all . Then there exists a constant such that References External links Lemmas in analysis Approximation theory Finite element method
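Because the inline mathematics has not survived in this text, the LaTeX block below restates the lemma in a common textbook form, close to the constructive version of Dupont and Scott discussed above. The symbols u, v, m, k, p, n, d and the constant C are supplied here for illustration and may differ from the notation of the original article.

```latex
% A common textbook form of the Bramble–Hilbert lemma (notation supplied here,
% not taken verbatim from this article). Omega is a bounded domain in R^n with
% diameter d, assumed star-shaped or satisfying the strong cone property.
\exists\, C = C(m,\Omega)\ \ \forall u \in W^{m,p}(\Omega)\ \ \exists\, v \in P_{m-1}:
\quad
\lvert u - v \rvert_{W^{k,p}(\Omega)}
  \;\le\; C\, d^{\,m-k}\, \lvert u \rvert_{W^{m,p}(\Omega)},
\qquad k = 0, 1, \dots, m.
```

Here |·|_{W^{k,p}(Ω)} denotes the Sobolev seminorm built only from the derivatives of order exactly k, matching the seminorm convention described in the statement above.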
Bramble–Hilbert lemma
[ "Mathematics" ]
1,045
[ "Theorems in mathematical analysis", "Approximation theory", "Mathematical relations", "Lemmas in mathematical analysis", "Lemmas", "Approximations" ]
13,933,421
https://en.wikipedia.org/wiki/Ionic%20conductivity%20%28solid%20state%29
Ionic conductivity (denoted by σ) is a measure of a substance's tendency towards ionic conduction. Ionic conduction is the movement of ions. The phenomenon is observed in solids and solutions. Ionic conduction is one mechanism of current. In crystalline solids In most solids, ions rigidly occupy fixed positions, strongly held in place by neighboring atoms or ions. In some solids, selected ions are highly mobile, allowing ionic conduction. The mobility increases with temperature. Materials exhibiting this property are used in batteries. A well-known ion conductive solid is β''-alumina ("BASE"), a form of aluminium oxide that has channels through which sodium cations can hop. When this ceramic is complexed with a mobile ion, such as Na+, it behaves as a so-called fast ion conductor. BASE is used as a membrane in several types of molten salt electrochemical cell. History Ionic conduction in solids has been a subject of interest since the beginning of the 19th century. Michael Faraday established in 1839 that the laws of electrolysis are also obeyed in ionic solids like lead(II) fluoride (PbF2) and silver sulfide (Ag2S). In 1921, solid silver iodide (AgI) was found to have extraordinarily high ionic conductivity at temperatures above 147 °C: above this temperature AgI changes into a phase that has an ionic conductivity of ~1 Ω−1 cm−1. This high-temperature phase of AgI is an example of a superionic conductor. The disordered structure of this solid allows the Ag+ ions to move easily. The present record holder for ionic conductivity is a closely related material. β''-alumina was developed at the Ford Motor Company in the search for a storage device for electric vehicles while developing the sodium–sulfur battery. See also Lattice energy Fast ion conductor NASICON References External links J Chem Phys Physical quantities Electrochemistry
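The strong temperature dependence of ionic mobility noted above is commonly described by an Arrhenius-type expression. The sketch below illustrates that standard model; the formula is a generic textbook form, and the activation energy and prefactor used here are made-up placeholders rather than values from this article.

```python
# Minimal sketch of the standard Arrhenius model for ionic conductivity,
# sigma(T) = (sigma0 / T) * exp(-Ea / (k_B * T)).
# sigma0 and Ea below are illustrative placeholders, not measured values.
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def ionic_conductivity(temp_k, sigma0=1.0e5, ea_ev=0.3):
    """Return an illustrative conductivity (S/cm) for a hopping conductor."""
    return (sigma0 / temp_k) * math.exp(-ea_ev / (K_B * temp_k))

for t in (300, 420, 600):  # conductivity rises steeply with temperature
    print(t, ionic_conductivity(t))
```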
Ionic conductivity (solid state)
[ "Physics", "Chemistry", "Mathematics" ]
383
[ "Physical phenomena", "Physical chemistry stubs", "Physical quantities", "Quantity", "Physical properties", "Electrochemistry", "Electrochemistry stubs" ]
13,933,711
https://en.wikipedia.org/wiki/Dynamic%20voltage%20scaling
In computer architecture, dynamic voltage scaling is a power management technique in which the voltage used in a component is increased or decreased, depending upon circumstances. Dynamic voltage scaling to increase voltage is known as overvolting; dynamic voltage scaling to decrease voltage is known as undervolting. Undervolting is done in order to conserve power, particularly in laptops and other mobile devices, where energy comes from a battery and thus is limited, or in rare cases, to increase reliability. Overvolting is done in order to support higher frequencies for performance. The term "overvolting" is also used to refer to increasing static operating voltage of computer components to allow operation at higher speed (overclocking). Background MOSFET-based digital circuits operate using voltages at circuit nodes to represent logical state. The voltage at these nodes switches between a high voltage and a low voltage during normal operation—when the inputs to a logic gate transition, the transistors making up that gate may toggle the gate's output. Toggling a MOSFET's state requires changing its gate voltage from below the transistor's threshold voltage to above it (or from above it to below it). However, changing the gate's voltage requires charging or discharging the capacitance at its node. This capacitance is the sum of capacitances from various sources: primarily transistor gate capacitance, diffusion capacitance, and wires (coupling capacitance). Higher supply voltages result in faster slew rate (rate of change of voltage per unit of time) when charging and discharging, which allows for quicker transitioning through the MOSFET's threshold voltage. Additionally, the more the gate voltage exceeds the threshold voltage, the lower the resistance of the transistor's conducting channel. This results in a lower RC time constant for quicker charging and discharging of the capacitance of the subsequent logic stage. Quicker transitioning afforded by higher supply voltages allows for operating at higher frequencies. Methods Many modern components allow voltage regulation to be controlled through software (for example, through the BIOS). It is usually possible to control the voltages supplied to the CPU, RAM, PCI, and PCI Express (or AGP) port through a PC's BIOS. However, some components do not allow software control of supply voltages, and hardware modification is required by overclockers seeking to overvolt the component for extreme overclocks. Video cards and motherboard northbridges are components which frequently require hardware modifications to change supply voltages. These modifications are known as "voltage mods" or "Vmod" in the overclocking community. Undervolting Undervolting is reducing the voltage of a component, usually the processor, reducing temperature and cooling requirements, and possibly allowing a fan to be omitted. Just like overclocking, undervolting is highly subject to the so-called silicon lottery: one CPU may undervolt slightly better than another, and vice versa. Power The switching power dissipated by a chip using static CMOS gates is P = αCV²f, where C is the capacitance being switched per clock cycle, V is the supply voltage, f is the switching frequency, and α is the activity factor. Since V is squared, this part of the power consumption decreases quadratically with voltage. The formula is not exact, however, as many modern chips are not implemented using 100% CMOS, but also use special memory circuits, dynamic logic such as domino logic, etc. 
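As a rough illustration of the quadratic voltage dependence in the switching-power expression above, the sketch below evaluates P = αCV²f at two supply voltages. The capacitance, activity factor, frequency, and voltages are made-up example numbers, not figures from this article.

```python
# Minimal sketch of CMOS dynamic (switching) power, P = alpha * C * V^2 * f.
# All numbers below are illustrative placeholders, not data from the article.
def switching_power(alpha, c_farads, v_volts, f_hertz):
    """Dynamic power in watts for a given activity factor, switched
    capacitance, supply voltage, and clock frequency."""
    return alpha * c_farads * v_volts**2 * f_hertz

nominal = switching_power(alpha=0.1, c_farads=1e-9, v_volts=1.2, f_hertz=2e9)
undervolted = switching_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hertz=2e9)

# Dropping the supply from 1.2 V to 1.0 V cuts switching power by roughly 31%
# even before the clock frequency is lowered.
print(nominal, undervolted, 1 - undervolted / nominal)
```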
Moreover, there is also a static leakage current, which has become more and more accentuated as feature sizes have become smaller (below 90 nanometres) and threshold levels lower. Accordingly, dynamic voltage scaling is widely used as part of strategies to manage switching power consumption in battery powered devices such as cell phones and laptop computers. Low voltage modes are used in conjunction with lowered clock frequencies to minimize power consumption associated with components such as CPUs and DSPs; only when significant computational power is needed will the voltage and frequency be raised. Some peripherals also support low voltage operational modes. For example, low power MMC and SD cards can run at 1.8 V as well as at 3.3 V, and driver stacks may conserve power by switching to the lower voltage after detecting a card which supports it. When leakage current is a significant factor in terms of power consumption, chips are often designed so that portions of them can be powered completely off. This is not usually viewed as being dynamic voltage scaling, because it is not transparent to software. When sections of chips can be turned off, as for example on TI OMAP3 processors, drivers and other support software need to support that. Program execution speed The speed at which a digital circuit can switch states - that is, to go from "low" (VSS) to "high" (VDD) or vice versa - is proportional to the voltage differential in that circuit. Reducing the voltage means that circuits switch more slowly, reducing the maximum frequency at which that circuit can run. This, in turn, reduces the rate at which program instructions can be issued, which may increase run time for program segments which are sufficiently CPU-bound. This again highlights why dynamic voltage scaling is generally done in conjunction with dynamic frequency scaling, at least for CPUs. There are complex tradeoffs to consider, which depend on the particular system, the load presented to it, and power management goals. When quick responses are needed (e.g. mobile sensors and context-aware computing), clocks and voltages might be raised together. Otherwise, they may both be kept low to maximize battery life. Implementations The 167-processor AsAP 2 chip enables individual processors to make extremely fast (on the order of 1–2 ns) and locally controlled changes to their own supply voltages. Processors connect their local power grid to either a higher (VddHi) or lower (VddLow) supply voltage, or can be cut off entirely from either grid to dramatically cut leakage power. Another approach uses per-core on-chip switching regulators for dynamic voltage and frequency scaling (DVFS). Operating system API Unix systems provide a userspace governor that allows CPU frequencies to be modified (though only within the limits of the hardware's capabilities). System stability Dynamic frequency scaling is another power conservation technique that works on the same principles as dynamic voltage scaling. Both dynamic voltage scaling and dynamic frequency scaling can be used to prevent computer system overheating, which can result in program or operating system crashes, and possibly hardware damage. Reducing the voltage supplied to the CPU below the manufacturer's recommended minimum setting can result in system instability. Temperature The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature, potentially causing thermal runaway. 
Increases in voltage or frequency may increase system power demands even faster than the CMOS formula indicates, and vice versa. Caveats The primary caveat of overvolting is increased heat: the power dissipated by a circuit increases with the square of the voltage applied, so even small voltage increases significantly affect power. At higher temperatures, transistor performance is adversely affected, and at some threshold, the performance reduction due to the heat exceeds the potential gains from the higher voltages. Overheating and damage to circuits can occur very quickly when using high voltages. There are also longer-term concerns: various adverse device-level effects such as hot carrier injection and electromigration occur more rapidly at higher voltages, decreasing the lifespan of overvolted components. To mitigate the increased heat from overvolting, liquid cooling is often recommended, as it allows higher thermal ceilings than a typical aftermarket air cooler. So-called "all-in-one" (AIO) liquid coolers are more effective because they move heat out of the computer case through fans on a radiator, whereas an air cooler only disperses heat away from the component, raising the ambient temperature inside the case. See also Dynamic voltage and frequency scaling (DVFS) Dynamic frequency scaling Power gating Power–delay product (PDP) Energy–delay product (EDP) Switched-mode power supply (SMPS) applications Switching energy Power ramp Overvoltage Undervoltage Voltage optimization References Further reading Computer hardware tuning Energy conservation Voltage
Dynamic voltage scaling
[ "Physics", "Mathematics" ]
1,732
[ "Physical quantities", "Electrical systems", "Quantity", "Physical systems", "Voltage", "Wikipedia categories named after physical quantities" ]
13,934,390
https://en.wikipedia.org/wiki/Load%20line%20%28electronics%29
In graphical analysis of nonlinear electronic circuits, a load line is a line drawn on the current–voltage characteristic graph for a nonlinear device like a diode or transistor. It represents the constraint put on the voltage and current in the nonlinear device by the external circuit. The load line, usually a straight line, represents the response of the linear part of the circuit, connected to the nonlinear device in question. The points where the characteristic curve and the load line intersect are the possible operating point(s) (Q points) of the circuit; at these points the current and voltage parameters of both parts of the circuit match. The example at right shows how a load line is used to determine the current and voltage in a simple diode circuit. The diode, a nonlinear device, is in series with a linear circuit consisting of a resistor, R, and a voltage source, VDD. The characteristic curve (curved line), representing the current I through the diode for any given voltage across the diode VD, is an exponential curve. The load line (diagonal line), representing the relationship between current and voltage due to Kirchhoff's voltage law applied to the resistor and voltage source, is I = (VDD − VD) / R. Since the same current flows through each of the three elements in series, and the voltage produced by the voltage source and resistor is the voltage across the terminals of the diode, the operating point of the circuit will be at the intersection of the curve with the load line. In a circuit with a three-terminal device, such as a transistor, the current–voltage curve of the collector-emitter current depends on the base current. This is depicted on graphs by a series of (IC–VCE) curves at different base currents. A load line drawn on this graph shows how the base current will affect the operating point of the circuit. Load lines for common configurations Transistor load line The load line diagram at right is for a resistive load in a common emitter circuit. The load line shows how the collector load resistor (RL) constrains the circuit voltage and current. The diagram also plots the transistor's collector current IC versus collector voltage VCE for different values of base current Ibase. The intersections of the load line with the transistor characteristic curves represent the circuit-constrained values of IC and VCE at different base currents. If the transistor could pass all the current available, with no voltage dropped across it, the collector current would be the supply voltage VCC over RL. This is the point where the load line crosses the vertical axis. Even at saturation, however, there will always be some voltage from collector to emitter. Where the load line crosses the horizontal axis, the transistor current is minimum (approximately zero). The transistor is said to be cut off, passing only a very small leakage current, and so very nearly the entire supply voltage appears as VCE. The operating point of the circuit in this configuration (labelled Q) is generally designed to be in the active region, approximately in the middle of the load line's active region for amplifier applications. Adjusting the base current so that the circuit is at this operating point with no signal applied is called biasing the transistor. Several techniques are used to stabilize the operating point against minor changes in temperature or transistor operating characteristics. 
When a signal is applied, the base current varies, and the collector-emitter voltage in turn varies, following the load line – the result is an amplifier stage with gain. A load line is normally drawn on the IC–VCE characteristic curves for the transistor used in an amplifier circuit. The same technique is applied to other types of non-linear elements such as vacuum tubes or field effect transistors. DC and AC load lines Semiconductor circuits typically have both DC and AC currents in them, with a source of DC current to bias the nonlinear semiconductor to the correct operating point, and the AC signal superimposed on the DC. Load lines can be used separately for both DC and AC analysis. The DC load line is the load line of the DC equivalent circuit, defined by reducing the reactive components to zero (replacing capacitors by open circuits and inductors by short circuits). It is used to determine the correct DC operating point, often called the Q point. Once a DC operating point is defined by the DC load line, an AC load line can be drawn through the Q point. The AC load line is a straight line with a slope equal to the AC impedance facing the nonlinear device, which is in general different from the DC resistance. The ratio of AC voltage to current in the device is defined by this line. Because the impedance of the reactive components will vary with frequency, the slope of the AC load line depends on the frequency of the applied signal. So there are many AC load lines that vary from the DC load line (at low frequency) to a limiting AC load line, all having a common intersection at the DC operating point. This limiting load line, generally referred to as the AC load line, is the load line of the circuit at "infinite frequency", and can be found by replacing capacitors with short circuits, and inductors with open circuits. References Electrical engineering
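The following is a minimal numerical sketch of finding the Q point for the diode-plus-resistor circuit described at the start of this article, by locating the intersection of an exponential (Shockley-type) diode characteristic with the load line I = (VDD − VD) / R. The component values, saturation current, and thermal voltage are illustrative assumptions, not values from the article.

```python
# Minimal sketch: solve for the diode Q point where an exponential diode
# characteristic meets the resistive load line I = (VDD - VD) / R.
# Component values and diode parameters are illustrative assumptions.
import math

VDD = 5.0        # supply voltage, volts
R = 1000.0       # series resistance, ohms
I_S = 1e-12      # diode saturation current, amperes
V_T = 0.02585    # thermal voltage at room temperature, volts

def diode_current(vd):
    return I_S * (math.exp(vd / V_T) - 1.0)

def load_line_current(vd):
    return (VDD - vd) / R

# Bisection on f(VD) = diode current - load line current; f is monotonically
# increasing in VD, so the sign change over [0, VDD] brackets the Q point.
lo, hi = 0.0, VDD
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if diode_current(mid) - load_line_current(mid) > 0.0:
        hi = mid
    else:
        lo = mid

vq = 0.5 * (lo + hi)
print("Q point:", vq, "V,", load_line_current(vq) * 1000.0, "mA")
```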
Load line (electronics)
[ "Engineering" ]
1,080
[ "Electrical engineering" ]
393,519
https://en.wikipedia.org/wiki/Voltage%20divider
In electronics, a voltage divider (also known as a potential divider) is a passive linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division is the result of distributing the input voltage among the components of the divider. A simple example of a voltage divider is two resistors connected in series, with the input voltage applied across the resistor pair and the output voltage emerging from the connection between them. Resistor voltage dividers are commonly used to create reference voltages, or to reduce the magnitude of a voltage so it can be measured, and may also be used as signal attenuators at low frequencies. For direct current and relatively low frequencies, a voltage divider may be sufficiently accurate if made only of resistors; where frequency response over a wide range is required (such as in an oscilloscope probe), a voltage divider may have capacitive elements added to compensate load capacitance. In electric power transmission, a capacitive voltage divider is used for measurement of high voltage. General case A voltage divider referenced to ground is created by connecting two electrical impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2 and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors. If the current in the output wire is zero then the relationship between the input voltage, Vin, and the output voltage, Vout, is: Vout = Vin · Z2 / (Z1 + Z2). Proof (using Ohm's law): the current through the series combination is I = Vin / (Z1 + Z2), and the output is the voltage across Z2, so Vout = I · Z2 = Vin · Z2 / (Z1 + Z2). The transfer function (also known as the divider's voltage ratio) of this circuit is: H = Vout / Vin = Z2 / (Z1 + Z2). In general this transfer function is a complex, rational function of frequency. Examples Resistive divider A resistive divider is the case where both impedances, Z1 and Z2, are purely resistive (Figure 2). Substituting Z1 = R1 and Z2 = R2 into the previous expression gives: Vout = Vin · R2 / (R1 + R2). If R1 = R2 then Vout = Vin / 2. If Vout = 6 V and Vin = 9 V (both commonly used voltages), then: Vout / Vin = R2 / (R1 + R2) = 6 / 9 = 2 / 3, and by solving using algebra, R2 must be twice the value of R1. To solve for R1: R1 = R2 · (Vin − Vout) / Vout = R2 / 2. To solve for R2: R2 = R1 · Vout / (Vin − Vout) = 2 · R1. Any ratio Vout / Vin greater than 1 is not possible. That is, using resistors alone it is not possible to either invert the voltage or increase Vout above Vin. Low-pass RC filter Consider a divider consisting of a resistor and capacitor as shown in Figure 3. Comparing with the general case, we see Z1 = R and Z2 is the impedance of the capacitor, given by Z2 = −j·XC = 1 / (jωC), where XC is the reactance of the capacitor, C is the capacitance of the capacitor, j is the imaginary unit, and ω (omega) is the radian frequency of the input voltage. This divider will then have the voltage ratio: Vout / Vin = 1 / (1 + jωRC). The product τ (tau) = RC is called the time constant of the circuit. The ratio then depends on frequency, in this case decreasing as frequency increases. This circuit is, in fact, a basic (first-order) low-pass filter. The ratio contains an imaginary number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is: |Vout / Vin| = 1 / √(1 + (ωRC)²). Inductive divider Inductive dividers split AC input according to inductance: Vout = Vin · L2 / (L1 + L2) (with components in the same positions as Figure 2.) The above equation is for non-interacting inductors; mutual inductance (as in an autotransformer) will alter the results. Inductive dividers split AC input according to the reactance of the elements as for the resistive divider above. Capacitive divider Capacitive dividers do not pass DC input. 
For an AC input, a simple capacitive equation is: Vout = Vin · C1 / (C1 + C2) (with components in the same positions as Figure 2.) Any leakage current in the capacitive elements requires use of the generalized expression with two impedances. By selection of parallel R and C elements in the proper proportions, the same division ratio can be maintained over a useful range of frequencies. This is the principle applied in compensated oscilloscope probes to increase measurement bandwidth. Loading effect The output voltage of a voltage divider will vary according to the electric current it is supplying to its external electrical load. The effective source impedance coming from a divider of Z1 and Z2, as above, will be Z1 in parallel with Z2 (sometimes written Z1 // Z2), that is: (Z1 · Z2) / (Z1 + Z2), which equals H · Z1, where H = Z2 / (Z1 + Z2) is the divider's transfer function. To obtain a sufficiently stable output voltage, the output current must either be stable (and so be made part of the calculation of the potential divider values) or limited to an appropriately small percentage of the divider's input current. Load sensitivity can be decreased by reducing the impedance of both halves of the divider, though this increases the divider's quiescent input current and results in higher power consumption (and wasted heat) in the divider. Voltage regulators are often used in lieu of passive voltage dividers when it is necessary to accommodate high or fluctuating load currents. Applications Voltage dividers are used for adjusting the level of a signal, for bias of active devices in amplifiers, and for measurement of voltages. A Wheatstone bridge and a multimeter both include voltage dividers. A potentiometer is used as a variable voltage divider in the volume control of many radios. Sensor measurement Voltage dividers can be used to allow a microcontroller to measure the resistance of a sensor. The sensor is wired in series with a known resistance to form a voltage divider and a known voltage is applied across the divider. The microcontroller's analog-to-digital converter is connected to the center tap of the divider so that it can measure the tap voltage and, by using the measured voltage and the known resistance and voltage, compute the sensor resistance. This technique is commonly used to measure the resistance of temperature sensors such as thermistors and RTDs. Another example that is commonly used involves a potentiometer (variable resistor) as one of the resistive elements. When the shaft of the potentiometer is rotated, the resistance it produces either increases or decreases; the change in resistance corresponds to the angular change of the shaft. If coupled with a stable voltage reference, the output voltage can be fed into an analog-to-digital converter and a display can show the angle. Such circuits are commonly used in reading control knobs. High voltage measurement A voltage divider can be used to scale down a very high voltage so that it can be measured by a voltmeter. The high voltage is applied across the divider, and the divider output—which outputs a lower voltage that is within the meter's input range—is measured by the meter. High voltage resistor divider probes designed specifically for this purpose can be used to measure voltages up to 100 kV. Special high-voltage resistors are used in such probes as they must be able to tolerate high input voltages and, to produce accurate results, must have matched temperature coefficients and very low voltage coefficients. 
Capacitive divider probes are typically used for voltages above 100 kV, as the heat caused by power losses in resistor divider probes at such high voltages could be excessive. Logic level shifting A voltage divider can be used as a crude logic level shifter to interface two circuits that use different operating voltages. For example, some logic circuits operate at 5 V whereas others operate at 3.3 V. Directly interfacing a 5 V logic output to a 3.3 V input may cause permanent damage to the 3.3 V circuit. In this case, a voltage divider with an output ratio of 3.3/5 might be used to reduce the 5 V signal to 3.3 V, to allow the circuits to interoperate without damaging the 3.3 V circuit. For this to be feasible, the 5 V source impedance and 3.3 V input impedance must be negligible, or they must be constant and the divider resistor values must account for their impedances. If the input impedance is capacitive, a purely resistive divider will limit the data rate. This can be roughly overcome by adding a capacitor in parallel with the top resistor, to make both legs of the divider capacitive as well as resistive. See also Current divider DC-to-DC converter Voltage amplifier References External links Voltage Divider Calculator Analog circuits Divider
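The sketch below illustrates the resistive divider relation Vout = Vin · R2 / (R1 + R2) and the sensor-measurement application described above, where a reference resistor and a sensor form a divider read by an ADC. The reference resistance, supply voltage, and ADC resolution are illustrative assumptions, not values from this article.

```python
# Minimal sketch of the divider relation Vout = Vin * R2 / (R1 + R2) and of
# recovering a sensor resistance from an ADC reading taken at the tap.
# The reference resistor, supply voltage, and ADC resolution are assumptions.
def divider_output(v_in, r1, r2):
    """Unloaded output voltage of a two-resistor divider."""
    return v_in * r2 / (r1 + r2)

def sensor_resistance_from_adc(adc_counts, adc_max=4095, v_in=3.3, r_ref=10_000.0):
    """Invert the divider: the sensor is the bottom leg (R2), a fixed
    reference resistor is the top leg (R1), and the tap feeds the ADC."""
    v_tap = v_in * adc_counts / adc_max
    return r_ref * v_tap / (v_in - v_tap)

print(divider_output(9.0, r1=100.0, r2=200.0))       # 6.0 V, the 9 V -> 6 V example
print(sensor_resistance_from_adc(adc_counts=2048))    # roughly 10 kOhm at mid-scale
```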
Voltage divider
[ "Physics", "Mathematics", "Engineering" ]
1,832
[ "Physical quantities", "Electrical systems", "Analog circuits", "Quantity", "Physical systems", "Electronic engineering", "Voltage", "Wikipedia categories named after physical quantities" ]
393,810
https://en.wikipedia.org/wiki/Richardson%20number
The Richardson number (Ri) is named after Lewis Fry Richardson (1881–1953). It is the dimensionless number that expresses the ratio of the buoyancy term to the flow shear term: where is gravity, is density, is a representative flow speed, and is depth. The Richardson number, or one of several variants, is of practical importance in weather forecasting and in investigating density and turbidity currents in oceans, lakes, and reservoirs. When considering flows in which density differences are small (the Boussinesq approximation), it is common to use the reduced gravity g' and the relevant parameter is the densimetric Richardson number which is used frequently when considering atmospheric or oceanic flows. If the Richardson number is much less than unity, buoyancy is unimportant in the flow. If it is much greater than unity, buoyancy is dominant (in the sense that there is insufficient kinetic energy to homogenize the fluids). If the Richardson number is of order unity, then the flow is likely to be buoyancy-driven: the energy of the flow derives from the potential energy in the system originally. Aviation In aviation, the Richardson number is used as a rough measure of expected air turbulence. A lower value indicates a higher degree of turbulence. Values in the range 10 to 0.1 are typical, with values below unity indicating significant turbulence. Thermal convection In thermal convection problems, Richardson number represents the importance of natural convection relative to the forced convection. The Richardson number in this context is defined as where g is the gravitational acceleration, is the thermal expansion coefficient, Thot is the hot wall temperature, Tref is the reference temperature, L is the characteristic length, and V is the characteristic velocity. The Richardson number can also be expressed by using a combination of the Grashof number and Reynolds number, Typically, the natural convection is negligible when Ri < 0.1, forced convection is negligible when Ri > 10, and neither is negligible when 0.1 < Ri < 10. It may be noted that usually the forced convection is large relative to natural convection except in the case of extremely low forced flow velocities. However, buoyancy often plays a significant role in defining the laminar–turbulent transition of a mixed convection flow. In the design of water filled thermal energy storage tanks, the Richardson number can be useful. Meteorology In atmospheric science, several different expressions for the Richardson number are commonly used: the flux Richardson number (which is fundamental), the gradient Richardson number, and the bulk Richardson number. The flux Richardson number is the ratio of buoyant production (or suppression) of turbulence kinetic energy to the production of turbulence by shear. Mathematically, this is: , where is the virtual temperature, is the virtual potential temperature, is the altitude, is the component of the wind, is the component of the wind, and is the (vertical) component of the wind. A prime (e.g. ) denotes a deviation of the respective field from its Reynolds average. The gradient Richardson number is arrived at by approximating the flux Richardson number using "K-theory". This results in: . The bulk Richardson number results from making a finite difference approximation to the derivatives in the expression for the gradient Richardson number, giving: . Here, for any variable , , i.e. the difference between at altitude and altitude . 
If the lower reference level is taken to be , then (due to the no-slip boundary condition), so the expression simplifies to: . Oceanography In oceanography, the Richardson number has a more general form which takes stratification into account. It is a measure of relative importance of mechanical and density effects in the water column, as described by the Taylor–Goldstein equation, used to model Kelvin–Helmholtz instability which is driven by sheared flows. where N is the Brunt–Väisälä frequency and u the wind speed. The Richardson number defined above is always considered positive. A negative value of N² (i.e. complex N) indicates unstable density gradients with active convective overturning. Under such circumstances the magnitude of negative Ri is not generally of interest. It can be shown that Ri < 1/4 is a necessary condition for velocity shear to overcome the tendency of a stratified fluid to remain stratified, and some mixing (turbulence) will generally occur. When Ri is large, turbulent mixing across the stratification is generally suppressed. References Atmospheric dispersion modeling Fluid dynamics Buoyancy Dimensionless numbers of fluid mechanics
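Since the displayed formulas have not survived in this text, the sketch below illustrates two standard forms of the Richardson number that match the variable definitions given above: the densimetric (bulk) form using the reduced gravity g', and the mixed-convection form gβ(T_hot − T_ref)L / V², which equals Gr / Re². The expressions are standard textbook forms rather than quotations from the article, and the numeric values are illustrative only.

```python
# Minimal sketch of two standard Richardson number forms; the expressions are
# standard and consistent with the variable definitions in the article, but
# the numeric values below are illustrative only.
G = 9.81  # gravitational acceleration, m/s^2

def densimetric_richardson(g_reduced, depth, speed):
    """Densimetric (bulk) form Ri = g' * h / u^2 for a stratified flow."""
    return g_reduced * depth / speed**2

def thermal_richardson(beta, t_hot, t_ref, length, velocity):
    """Mixed-convection form Ri = g * beta * (T_hot - T_ref) * L / V^2,
    equivalently Gr / Re^2."""
    return G * beta * (t_hot - t_ref) * length / velocity**2

# A slow, strongly stratified current: buoyancy dominates (Ri >> 1)
print(densimetric_richardson(g_reduced=0.05, depth=10.0, speed=0.1))

# Forced flow past a warm plate: Ri < 0.1, so natural convection is negligible
print(thermal_richardson(beta=3.4e-3, t_hot=330.0, t_ref=300.0, length=0.1, velocity=1.5))
```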
Richardson number
[ "Chemistry", "Engineering", "Environmental_science" ]
935
[ "Chemical engineering", "Atmospheric dispersion modeling", "Environmental engineering", "Piping", "Environmental modelling", "Fluid dynamics" ]
393,813
https://en.wikipedia.org/wiki/Boussinesq%20approximation%20%28buoyancy%29
In fluid dynamics, the Boussinesq approximation (, named for Joseph Valentin Boussinesq) is used in the field of buoyancy-driven flow (also known as natural convection). It ignores density differences except where they appear in terms multiplied by , the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids. The existence of sound waves in a Boussinesq fluid is not possible as sound is the result of density fluctuations within a fluid. Boussinesq flows are common in nature (such as atmospheric fronts, oceanic circulation, katabatic winds), industry (dense gas dispersion, fume cupboard ventilation), and the built environment (natural ventilation, central heating). The approximation can be used to simplify the equations describing such flows, whilst still describing the flow behaviour to a high degree of accuracy. Formulation The Boussinesq approximation is applied to problems where the fluid varies in temperature (or composition) from one place to another, driving a flow of fluid and heat transfer (or mass transfer). The fluid satisfies conservation of mass, conservation of momentum and conservation of energy. In the Boussinesq approximation, variations in fluid properties other than density are ignored, and density only appears when it is multiplied by , the gravitational acceleration. If is the local velocity of a parcel of fluid, the continuity equation for conservation of mass is If density variations are ignored, this reduces to The general expression for conservation of momentum of an incompressible, Newtonian fluid (the Navier–Stokes equations) is where (nu) is the kinematic viscosity and is the sum of any body forces such as gravity. In this equation, density variations are assumed to have a fixed part and another part that has a linear dependence on temperature: where is the coefficient of thermal expansion. The Boussinesq approximation states that the density variation is only important in the buoyancy term. If is the gravitational body force, the resulting conservation equation is In the equation for heat flow in a temperature gradient, the heat capacity per unit volume, , is assumed constant and the dissipation term is ignored. The resulting equation is where is the rate per unit volume of internal heat production and is the thermal conductivity. The three numbered equations are the basic convection equations in the Boussinesq approximation. Advantages The advantage of the approximation arises because when considering a flow of, say, warm and cold water of density and one needs only to consider a single density : the difference is negligible. Dimensional analysis shows that, under these circumstances, the only sensible way that acceleration due to gravity should enter into the equations of motion is in the reduced gravity where (Note that the denominator may be either density without affecting the result because the change would be of order .) The most generally used dimensionless number would be the Richardson number and Rayleigh number. The mathematics of the flow is therefore simpler because the density ratio , a dimensionless number, does not affect the flow; the Boussinesq approximation states that it may be assumed to be exactly one. Inversions One feature of Boussinesq flows is that they look the same when viewed upside-down, provided that the identities of the fluids are reversed. 
The Boussinesq approximation is inaccurate when the dimensionless density difference is approximately 1, i.e. . For example, consider an open window in a warm room. The warm air inside is less dense than the cold air outside, which flows into the room and down towards the floor. Now imagine the opposite: a cold room exposed to warm outside air. Here the air flowing in moves up toward the ceiling. If the flow is Boussinesq (and the room is otherwise symmetrical), then viewing the cold room upside down is exactly the same as viewing the warm room right-way-round. This is because the only way density enters the problem is via the reduced gravity which undergoes only a sign change when changing from the warm room flow to the cold room flow. An example of a non-Boussinesq flow is bubbles rising in water. The behaviour of air bubbles rising in water is very different from the behaviour of water falling in air: in the former case rising bubbles tend to form hemispherical shells, while water falling in air splits into raindrops (at small length scales surface tension enters the problem and confuses the issue). References Further reading Fluid dynamics Buoyancy
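Because the displayed equations have not survived in this text, the LaTeX block below restates the reduced gravity and the three Boussinesq convection equations in a standard form consistent with the definitions in the Formulation section above. The symbols are supplied here for illustration and may differ from the notation of the original article.

```latex
% Standard Boussinesq relations (symbols supplied here; notation may differ
% from the original article). u is velocity, p pressure, T temperature,
% T_0 a reference temperature, \nu kinematic viscosity, \alpha the thermal
% expansion coefficient, g gravity, \rho_0 a reference density, k thermal
% conductivity, c_p heat capacity, q internal heating rate per unit volume,
% and \hat{z} the upward unit vector.
g' = g\,\frac{\rho_1 - \rho_2}{\rho_1}
\qquad
\nabla \cdot \mathbf{u} = 0
\qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho_0}\nabla p + \nu\,\nabla^2 \mathbf{u}
    + g\,\alpha\,(T - T_0)\,\hat{\mathbf{z}}
\qquad
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  = \frac{k}{\rho_0 c_p}\,\nabla^2 T + \frac{q}{\rho_0 c_p}
```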
Boussinesq approximation (buoyancy)
[ "Chemistry", "Engineering" ]
939
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
394,464
https://en.wikipedia.org/wiki/Iron%E2%80%93sulfur%20world%20hypothesis
The iron–sulfur world hypothesis is a set of proposals for the origin of life and the early evolution of life advanced in a series of articles between 1988 and 1992 by Günter Wächtershäuser, a Munich patent lawyer with a degree in chemistry, who had been encouraged and supported by philosopher Karl R. Popper to publish his ideas. The hypothesis proposes that early life may have formed on the surface of iron sulfide minerals, hence the name. It was developed by retrodiction (making a "prediction" about the past) from extant biochemistry (non-extinct, surviving biochemistry) in conjunction with chemical experiments. Origin of life Pioneer organism Wächtershäuser proposes that the earliest form of life, termed the "pioneer organism", originated in a volcanic hydrothermal flow at high pressure and high (100 °C) temperature. It had a composite structure of a mineral base with catalytic transition metal centers (predominantly iron and nickel, but also perhaps cobalt, manganese, tungsten and zinc). The catalytic centers catalyzed autotrophic carbon fixation pathways generating small molecule (non-polymer) organic compounds from inorganic gases (e.g. carbon monoxide, carbon dioxide, hydrogen cyanide and hydrogen sulfide). These organic compounds were retained on or in the mineral base as organic ligands of the transition metal centers with a flow retention time in correspondence with their mineral bonding strength thereby defining an autocatalytic "surface metabolism". The catalytic transition metal centers became autocatalytic by being accelerated by their organic products turned ligands. The carbon fixation metabolism became autocatalytic by forming a metabolic cycle in the form of a primitive sulfur-dependent version of the reductive citric acid cycle. Accelerated catalysts expanded the metabolism and new metabolic products further accelerated the catalysts. The idea is that once such a primitive autocatalytic metabolism was established, its intrinsically synthetic chemistry began to produce ever more complex organic compounds, ever more complex pathways and ever more complex catalytic centers. Nutrient conversions The water gas shift reaction (CO + H2O → CO2 + H2) occurs in volcanic fluids with diverse catalysts or without catalysts. The combination of ferrous sulfide (FeS, troilite) and hydrogen sulfide () as reducing agents (both reagents are simultaneously oxidized in the reaction here under creating the disulfide bond, S–S) in conjunction with pyrite () formation: FeS + H2S → FeS2 + 2 H+ + 2 e− or with H2 directly produced instead of 2 H+ + 2 e− FeS + H2S → FeS2 + H2 has been demonstrated under mild volcanic conditions. This key result has been disputed. Nitrogen fixation has been demonstrated for the isotope 15N2 in conjunction with pyrite formation. Ammonia forms from nitrate with FeS/H2S as reductant. Methylmercaptan [CH3-SH] and carbon oxysulfide [COS] form from CO2 and FeS/H2S, or from CO and H2 in the presence of NiS. Synthetic reactions Reaction of carbon monoxide (CO), hydrogen sulfide (H2S) and methanethiol CH3SH in the presence of nickel sulfide and iron sulfide generates the methyl thioester of acetic acid [CH3-CO-SCH3] and presumably thioacetic acid (CH3-CO-SH) as the simplest activated acetic acid analogues of acetyl-CoA. These activated acetic acid derivatives serve as starting materials for subsequent exergonic synthetic steps. They also serve for energy coupling with endergonic reactions, notably the formation of (phospho)anhydride compounds. 
However, Huber and Wächtershäuser reported acetate yields as low as 0.5% based on the input of CH3SH (methanethiol) (8 mM) in the presence of 350 mM CO. This is about 500 times and 3700 times the highest CH3SH and CO concentrations respectively measured to date in a natural hydrothermal vent fluid. Reaction of nickel hydroxide with hydrogen cyanide (HCN) (in the presence or absence of ferrous hydroxide, hydrogen sulfide or methyl mercaptan) generates nickel cyanide, which reacts with carbon monoxide (CO) to generate pairs of α-hydroxy and α-amino acids: e.g. glycolate/glycine, lactate/alanine, glycerate/serine; as well as pyruvic acid in significant quantities. Pyruvic acid is also formed at high pressure and high temperature from CO, H2O, FeS in the presence of nonyl mercaptan. Reaction of pyruvic acid or other α-keto acids with ammonia in the presence of ferrous hydroxide or in the presence of ferrous sulfide and hydrogen sulfide generates alanine or other α-amino acids. Reaction of α-amino acids in aqueous solution with COS or with CO and H2S generates a peptide cycle wherein dipeptides, tripeptides etc. are formed and subsequently degraded via N-terminal hydantoin moieties and N-terminal urea moieties and subsequent cleavage of the N-terminal amino acid unit. Proposed reaction mechanism for reduction of CO2 on FeS: Ying et al. (2007) have shown that direct transformation of mackinawite (FeS) to pyrite (FeS2) on reaction with H2S at up to 300 °C is not possible without the presence of a critical amount of oxidant. In the absence of any oxidant, FeS reacts with H2S up to 300 °C to give pyrrhotite. Farid et al. have experimentally shown that mackinawite (FeS) has the ability to reduce CO2 to CO at temperatures higher than 300 °C. They reported that the surface of FeS is oxidized, which on reaction with H2S gives pyrite (FeS2). It is expected that CO reacts with H2O in the Drobner experiment to give H2. Early evolution Early evolution is defined as beginning with the origin of life and ending with the last universal common ancestor (LUCA). According to the iron–sulfur world theory it covers a coevolution of cellular organization (cellularization), the genetic machinery and enzymatization of the metabolism. Cellularization Cellularization occurs in several stages. It may have begun with the formation of primitive lipids (e.g. fatty acids or isoprenoids) in the surface metabolism. These lipids accumulate on or in the mineral base. This lipophilizes the outer or inner surfaces of the mineral base, which promotes condensation reactions over hydrolytic reactions by lowering the activity of water and protons. In the next stage lipid membranes are formed. While still anchored to the mineral base they form a semi-cell bounded partly by the mineral base and partly by the membrane. Further lipid evolution leads to self-supporting lipid membranes and closed cells. The earliest closed cells are pre-cells (sensu Kandler) because they allow frequent exchange of genetic material (e.g. by fusions). According to Woese, this frequent exchange of genetic material is the cause of the common stem in the tree of life and of a very rapid early evolution. Nick Lane and coauthors state that "Non-enzymatic equivalents of glycolysis, the pentose phosphate pathway and gluconeogenesis have been identified as well. Multiple syntheses of amino acids from α-keto acids by direct reductive amination and by transamination reactions can also take place. 
Long-chain fatty acids can be formed by hydrothermal Fischer–Tropsch-type synthesis, which chemically resembles the process of fatty acid elongation. Recent work suggests that nucleobases might also be formed following the universally conserved biosynthetic pathways, using metal ions as catalysts". Metabolic intermediates in glycolysis and the pentose phosphate pathway such as glucose, pyruvate, ribose 5-phosphate, and erythrose-4-phosphate are spontaneously generated in the presence of Fe(II). Fructose 1,6-bisphosphate, a metabolic intermediate in gluconeogenesis, was shown to accumulate continuously, but only in a frozen solution. The formation of fructose 1,6-bisphosphate was accelerated by lysine and glycine, which suggests that the earliest anabolic catalysts may have been amino acids. It has been reported that 4Fe-4S, 2Fe-2S, and mononuclear iron clusters are spontaneously formed at low concentrations of cysteine and alkaline pH. Methyl thioacetate, a precursor to acetyl-CoA, can be synthesized in conditions relevant to hydrothermal vents. Phosphorylation of methyl thioacetate leads to the synthesis of thioacetate, a simpler precursor to acetyl-CoA. In cooler and more neutral conditions, thioacetate promotes synthesis of acetyl phosphate, which is a precursor to adenosine triphosphate and is capable of phosphorylating ribose and nucleosides. This suggests that acetyl phosphate was likely synthesized during thermophoresis and mixing between the acidic seawater and alkaline hydrothermal fluid in interconnected micropores. It is possible that it could promote nucleotide polymerization at mineral surfaces or at low water activity. Thermophoresis at hydrothermal vent pores can concentrate polyribonucleotides, but it remains unknown how it could promote coding and metabolic reactions. In mathematical simulations, autocatalytic nucleotide synthesis is proposed to promote protocell growth as nucleotides also catalyze CO2 fixation. Strong nucleotide catalysis of fatty acids and amino acids slows down protocell growth, and if competition between catalytic functions were to occur, this would disrupt the protocell. Weak or moderate nucleotide catalysis of amino acids via CO2 fixation would favor protocell division and growth. In 2017, a computational simulation of a protocell at an alkaline hydrothermal vent environment showed that "Some hydrophobic amino acids chelate FeS nanocrystals, producing three positive feedbacks: (i) an increase in catalytic surface area; (ii) partitioning of FeS nanocrystals to the membrane; and (iii) a proton-motive active site for carbon fixing that mimics the enzyme Ech". Maximal ATP synthesis would have occurred at high water activity in freshwater, and high concentrations of Mg2+ and Ca2+ prevented synthesis of ATP; however, the concentrations of divalent cations in Hadean oceans were much lower than in modern oceans, and alkaline hydrothermal vent concentrations of Mg2+ and Ca2+ are typically lower than in the ocean. Such environments would have generated Fe3+, which would have promoted ADP phosphorylation. The mixture of seawater and alkaline hydrothermal vent fluid can promote cycling between Fe3+ and Fe2+. Experimental research of biomimetic prebiotic reactions, such as the reduction of NAD and phosphoryl transfer, also supports an origin of life occurring at an alkaline hydrothermal vent. 
Proto-ecological systems William Martin and Michael Russell suggest that the first cellular life forms may have evolved inside alkaline hydrothermal vents at seafloor spreading zones in the deep sea. These structures consist of microscale caverns that are coated by thin membraneous metal sulfide walls. Therefore, these structures would resolve several critical points germane to Wächtershäuser's suggestions at once: the micro-caverns provide a means of concentrating newly synthesised molecules, thereby increasing the chance of forming oligomers; the steep temperature gradients inside the hydrothermal vent allow for establishing "optimum zones" of partial reactions in different regions of the vent (e.g. monomer synthesis in the hotter, oligomerisation in the cooler parts); the flow of hydrothermal water through the structure provides a constant source of building blocks and energy (chemical disequilibrium between hydrothermal hydrogen and marine carbon dioxide); the model allows for a succession of different steps of cellular evolution (prebiotic chemistry, monomer and oligomer synthesis, peptide and protein synthesis, RNA world, ribonucleoprotein assembly and DNA world) in a single structure, facilitating exchange between all developmental stages; synthesis of lipids as a means of "closing" the cells against the environment is not necessary, until basically all cellular functions are developed. This model locates the "last universal common ancestor" (LUCA) within the inorganically formed physical confines of an alkaline hydrothermal vent, rather than assuming the existence of a free-living form of LUCA. The last evolutionary step en route to bona fide free-living cells would be the synthesis of a lipid membrane that finally allows the organisms to leave the microcavern system of the vent. This postulated late acquisition of the biosynthesis of lipids as directed by genetically encoded peptides is consistent with the presence of completely different types of membrane lipids in archaea and bacteria (plus eukaryotes). The kind of vent at the foreground of their suggestion is chemically more similar to the warm (ca. 100 °C) off ridge vents such as Lost City than to the more familiar black smoker type vents (ca. 350 °C). In an abiotic world, a thermocline of temperatures and a chemocline in concentration is associated with the pre-biotic synthesis of organic molecules, hotter in proximity to the chemically rich vent, cooler but also less chemically rich at greater distances. The migration of synthesized compounds from areas of high concentration to areas of low concentration gives a directionality that provides both source and sink in a self-organizing fashion, enabling a proto-metabolic process by which acetic acid production and its eventual oxidization can be spatially organized. In this way many of the individual reactions that are today found in central metabolism could initially have occurred independent of any developing cell membrane. Each vent microcompartment is functionally equivalent to a single cell. Chemical communities having greater structural integrity and resilience to wildly fluctuating conditions are then selected for; their success would lead to local zones of depletion for important precursor chemicals. Progressive incorporation of these precursor components within a cell membrane would gradually increase metabolic complexity within the cell membrane, whilst leading to greater environmental simplicity in the external environment. 
In principle, this could lead to the development of complex catalytic sets capable of self-maintenance. Russell adds a significant factor to these ideas, by pointing out that semi-permeable mackinawite (an iron sulfide mineral) and silicate membranes could naturally develop under these conditions and electrochemically link reactions separated in space, if not in time. Alternative environment The promotion of 6 of the 11 metabolic intermediates of the reverse Krebs cycle by Fe, Zn2+, and Cr3+ under acidic conditions implies that protocells may have emerged in locally metal-rich, acidic terrestrial hydrothermal fields. The acidic conditions are seemingly consistent with the stabilization of RNA. These hydrothermal fields would have exhibited cycles of freezing and thawing and a variety of temperature gradients that would promote nonenzymatic reactions of gluconeogenesis, nucleobase synthesis, nonenzymatic polymerization, and RNA replication. ATP synthesis and oxidation of ferrous iron via photochemical reactions, or via oxidants such as nitric oxide derived from lightning strikes, meteorite impacts, or volcanic emissions, could also have occurred at hydrothermal fields. Wet–dry cycling of hydrothermal fields would polymerize RNA and peptides; protocell aggregation in a moist gel phase during wet–dry cycling would allow diffusion of metabolic products across neighboring protocells. Protocell aggregation could be described as a primitive version of horizontal gene transfer. Fatty acid vesicles would be stabilized by polymers in the presence of the Mg2+ required for ribozyme activity. These prebiotic processes might have occurred in shaded areas protecting the emergence of early cellular life from ultraviolet irradiation. Long-chain alcohols and monocarboxylic acids would also have been synthesized via Fischer–Tropsch synthesis. Hydrothermal fields would also have had precipitates of transition metals and concentrated many elements, including CHNOPS. Geothermal convection could also be a source of energy for the emergence of the proton motive force, phosphoryl group transfer, coupling between oxidation and reduction, and active transport. David Deamer and Bruce Damer note that these environments resemble Charles Darwin's idea of a "warm little pond". One problem with the hypothesis of abiogenesis at a subaerial hydrothermal field is that the proposed chemistry does not resemble known biochemical reactions. Subaerial hydrothermal fields would also have been rare and would have offered no protection from either meteorites or ultraviolet irradiation. Clay minerals at subaerial hydrothermal fields would absorb organic reactants. Pyrophosphate has low solubility in water and cannot be phosphorylated without a phosphorylating agent. The hypothesis also does not explain the origin of chemiosmosis or the differences between Archaea and Bacteria. See also Abiogenesis Iron–sulfur protein RNA world RNP world Miller–Urey experiment References Origin of life Metabolism
Iron–sulfur world hypothesis
[ "Chemistry", "Biology" ]
3,689
[ "Origin of life", "Biochemistry", "Cellular processes", "Biological hypotheses", "Metabolism" ]
394,520
https://en.wikipedia.org/wiki/Deutscher%20Werkbund
The Deutscher Werkbund is a German association of artists, architects, designers and industrialists established in 1907. The Werkbund became an important element in the development of modern architecture and industrial design, particularly in the later creation of the Bauhaus school of design. Its initial purpose was to establish a partnership of product manufacturers with design professionals to improve the competitiveness of German companies in global markets. The Werkbund was less an artistic movement than a state-sponsored effort to integrate traditional crafts and industrial mass production techniques, to put Germany on a competitive footing with England and the United States. Its motto Vom Sofakissen zum Städtebau (from sofa cushions to city-building) indicates its range of interest. History The Deutscher Werkbund emerged when the architect Joseph Maria Olbrich left Vienna for Darmstadt, Germany, in 1899, to form an artists' colony at the invitation of Ernest Louis, Grand Duke of Hesse. The Werkbund was founded by Olbrich, Peter Behrens, Richard Riemerschmid, Bruno Paul and others in 1907 in Munich at the instigation of Hermann Muthesius; it existed until 1934 and was re-established after World War II, in 1950. Muthesius was the author of the exhaustive three-volume "The English House" of 1905, a survey of the practical lessons of the English Arts and Crafts movement. Muthesius was seen as something of a cultural ambassador, or industrial spy, between Germany and England. The organization originally included twelve architects and twelve business firms. The architects included Peter Behrens, Theodor Fischer (who served as its first president), Josef Hoffmann, Bruno Paul, Max Laeuger and Richard Riemerschmid. Other architects affiliated with the project included Heinrich Tessenow and the Belgian Henry van de Velde. By 1914, it had 1,870 members, including heads of museums. The Werkbund commissioned van de Velde to design a theater for the 1914 Werkbund Exhibition in Cologne. The exhibition was closed and the buildings dismantled ahead of schedule because of the outbreak of World War I. Eliel Saarinen was made a corresponding member of the Deutscher Werkbund in 1914 and was invited to participate in the 1914 Cologne exhibition. Among the Werkbund's more noted members was the architect Mies van der Rohe, who served as Architectural Director. Key dates of the Deutscher Werkbund 1907, Establishment of the Werkbund in Munich 1910, Salon d'Automne, Paris 1914, Werkbund Exhibition, Cologne 1920, Lilly Reich becomes the first female Director 1924, Berlin exhibition 1927, Stuttgart exhibition (including the Weissenhof Estate) 1929, Breslau exhibition 1934, Werkbund declares its dissolution 1947, Reestablishment 100th anniversary The Verband Deutscher Industrie Designer (Association of German Industrial Designers, or VDID) and the Bund Deutscher Grafik-Designer (Federation of German Graphic Designers, or "BDG-Mitte") held a joint meeting to celebrate the 100th anniversary of the Deutscher Werkbund. A juried exhibition and opening was held on 14 March 2008. Museum der Dinge The collections and archives (Werkbundarchiv) of the Werkbund are housed at the Museum der Dinge (Museum of Things) in Berlin. The museum is focused on design and objects used in everyday life in the 20th century up to the present. Among other exhibits, it includes a Frankfurt kitchen. 
Members Konrad Adenauer Friedrich Adler Adolf Arndt Anker-Werke Delmenhorst Ferdinand Avenarius Otto Bartning Willi Baumeister Adolf Behne Hendrik Petrus Berlage Richard Berndl Johann Michael Bossard Raymund Brachmann Fritz August Breuhaus de Groot Bazon Brock Ulrich Böhme Max Burchartz Charles Crodel Carl Otto Czeschka Wilhelm von Debschitz Franz Karl Delavilla Peter A. Demeter Walter Dexel Eugen Diederichs Bruno Dörpinghaus Karl Duschek Adolph Eckhardt Egon Eiermann Albert Eitel August Endell Jupp Ernst Lyonel Feininger Wend Fischer Karl Ganser Hansjörg Göritz Hermann Gretsch Walter Gropius Moritz Hadda Richard Hamann Luise Harkort Hugo Häring Hans Heckner Max Heidrich Erwin Heerich Hans Hertlein Max Hertwig Lucy Hillebrand Georg Hirth Theodor Heuss Ot Hoffmann Helmut Hofmann Ferdy Horrmeyer Paul Horst-Schulze Klaus Humpert Walter Maria Kersting Harald Kimpel Moissey Kogan Hans P. Koellmann Ludwig König Ernst Kühn Hugo Kükelhaus Klaus Küster Ferdinand Kramer Günter Kupetz Emil Lange Carl Langhein Josef Lehmbrock El Lissitzky Johannes Ludovicus Mathieu Lauweriks Richard Luksch Gerhard Marcks Ewald Mataré Ernst May Kunstmuseen Krefeld Erich Mendelsohn Wolfgang Meisenheimer Georg Metzendorf Mies van der Rohe Leberecht Migge Anna Muthesius Hermann Muthesius Friedrich Naumann Walter Neuhäusser Hans Neumann Else Oppler-Legband Karl Ernst Osthaus Ludwig Paffendorf Bernhard Pankok Karl Poser Walfried Pohl Jan Thorn Prikker Peter Raacke Adolf Rading Jochen Rahe Dieter Rams Walther Rathenau Carl Rehorst Lilly Reich Albert Reimann Albert Renger-Patzsch Paul Renner Richard Riemerschmid Alexander Michailowitsch Rodtschenko Gregor Rosenbauer Walter Rossow Werner Ruhnau Hans Scharoun Karl Schmidt-Hellerau Willy Schönefeld Werner Schriefers Rudolf Alexander Schröder Reinhard Schulze Fritz Schupp Margarete Schütte-Lihotzky Walter Schwagenscheidt Rudolf Schwarz Hans Schwippert Ferdinand Selle Bernd Sikora Anna Simons Carl Sonntag jun. Friedrich Spengelin Bernhard Stadler Anton Stankowski Heinz Stoffregen Ludwig Sütterlin Heinrich Straumer Gustav Stresemann Bruno Taut Heinrich Tessenow Paul Thiersch Emil Thormählen Walter Tiemann Paul Ludwig Troost Otto Ubbelohde Henry van de Velde Theodor Veil Otto Voelckers Heinrich Vogeler Fritz Wärndorfer Wilhelm Wagenfeld Otto Wagner Udo Weilacher Werkbund Werkstatt Nürnberg Edward Weston Alfred Wiener Karl With Dieter Witte Georg Wrba Christoph Zöpel Berta Zuckerkandl See also New Objectivity (architecture) Modern architecture WUWA (Breslau) References Further reading Lucius Burckhardt (1987). The Werkbund. Hyperion Press. Frederic J. Schwartz (1996). The Werkbund: Design Theory and Mass Culture Before the First World War. New Haven, Conn. : Yale University Press. Mark Jarzombek. "Joseph August Lux: Werkbund Promoter, Historian of a Lost Modernity," Journal of the Society of Architectural Historians 63/1 (June 2004): 202–219. Ot Hoffmann im Auftrag des DWB: Der Deutsche Werkbund – 1907, 1947, 1987. Wilhelm Ernst & Sohn, Frankfurt 1987, . Yuko Ikeda: Vom Sofakissen zum Städtebau. Hermann Muthesius und der Deutsche Werkbund. Modern Design in Deutschland 1900–1927. Ausstellungskatalog. The National Museum of Modern Art, Kyoto 2002, . Karl-Ernst-Osthaus-Museum Hagen und Kaiser-Wilhelm-Museum Krefeld: Das Schöne und der Alltag – Deutsches Museum für Kunst in Handel und Gewerbe. Ausstellungskatalog. Pandora Snoeck-Ducaju & Zoon, Gent 1997, . 
External links Werkbundarchiv: Museum der Dinge – official site Bauhaus 1907 establishments in Germany Industrial design Graphic design Modernist architecture in Germany Architecture groups
Deutscher Werkbund
[ "Engineering" ]
1,659
[ "Industrial design", "Design engineering", "Design" ]
394,765
https://en.wikipedia.org/wiki/Yeast%20artificial%20chromosome
Yeast artificial chromosomes (YACs) are genetically engineered chromosomes derived from the DNA of the yeast Saccharomyces cerevisiae, which is then ligated into a bacterial plasmid. By inserting large fragments of DNA, from 100–1000 kb, the inserted sequences can be cloned and physically mapped using a process called chromosome walking. This is the process that was initially used for the Human Genome Project; however, due to stability issues, YACs were abandoned in favor of bacterial artificial chromosomes. The bakers' yeast S. cerevisiae is one of the most important experimental organisms for studying eukaryotic molecular genetics. Beginning with the initial research of Rankin et al., Struhl et al., and Hsiao et al., the inherently fragile chromosome was stabilized by discovering the necessary autonomously replicating sequence (ARS); a refined YAC utilizing this data was described in 1983 by Murray et al. The primary components of a YAC are the ARS, centromere, and telomeres from S. cerevisiae. Additionally, selectable marker genes, such as antibiotic resistance and a visible marker, are utilized to select transformed yeast cells. Without these sequences, the chromosome will not be stable during extracellular replication, and would not be distinguishable from colonies without the vector. Construction A YAC is built using an initial circular DNA plasmid, which is typically cut into a linear DNA molecule using restriction enzymes; DNA ligase is then used to ligate a DNA sequence or gene of interest into the linearized DNA, forming a single large, circular piece of DNA. The basic generation of linear yeast artificial chromosomes can be broken down into 6 main steps. Full chromosome III Chromosome III is the third smallest chromosome in S. cerevisiae; its size was estimated from pulsed-field gel electrophoresis studies to be 300–360 kb. This chromosome has been the subject of intensive study, not least because it contains the three genetic loci involved in mating-type control: MAT, HML and HMR. In March 2014, Jef Boeke of the Langone Medical Center at New York University published that his team had synthesized one of the 16 S. cerevisiae yeast chromosomes, chromosome III, which he named synIII. The procedure involved replacing the genes in the original chromosome with synthetic versions, and the finished synthesized chromosome was then integrated into a yeast cell. It required designing and creating 273,871 base pairs of DNA – fewer than the 316,667 pairs in the original chromosome. Uses in biotechnology Yeast expression vectors, such as YACs, YIps (yeast integrating plasmids), and YEps (yeast episomal plasmids), have an advantage over bacterial artificial chromosomes (BACs) in that they can be used to express eukaryotic proteins that require posttranslational modification. By being able to insert large fragments of DNA, YACs can be utilized to clone and assemble the entire genome of an organism. With the insertion of a YAC into yeast cells, they can be propagated as linear artificial chromosomes, cloning the inserted regions of DNA in the process. With this completed, two processes can be used to obtain a sequenced genome or region of interest: physical mapping and chromosome walking. This is significant in that it allows for the detailed mapping of specific regions of the genome. Whole human chromosomes have been examined, such as the X chromosome, generating the location of genetic markers for numerous genetic disorders and traits. 
The Human Genome Project In the United States, the Human Genome Project first took clear form in February 1988, with the release of the National Research Council (NRC) report Mapping and Sequencing the Human Genome. YACs are significantly less stable than BACs, producing "chimeric effects": artifacts where the sequence of the cloned DNA actually corresponds not to a single genomic region but to multiple regions. Chimerism may be due to either co-ligation of multiple genomic segments into a single YAC, or recombination of two or more YACs transformed into the same host yeast cell. The incidence of chimerism may be as high as 50%. Other artifacts are deletion of segments from a cloned region, and rearrangement of genomic segments (such as inversion). In all these cases, the sequence as determined from the YAC clone is different from the original, natural sequence, leading to inconsistent results and errors in interpretation if the clone's information is relied upon. Due to these issues, the Human Genome Project ultimately abandoned the use of YACs and switched to bacterial artificial chromosomes, where the incidence of these artifacts is very low. In addition to stability issues, specifically the relatively frequent occurrence of chimeric events, YACs proved to be inefficient when generating the minimum tiling path covering the entire human genome. Generating the clone libraries is time consuming. Also, due to the reliance on sequence-tagged sites (STS) as a reference point when selecting appropriate clones, there are large gaps that need further generation of libraries to span. It is this additional hindrance that drove the project to utilize BACs instead. This is due to two factors: BACs are much quicker to generate, which is essential when generating redundant libraries of clones; and BACs allow denser coverage with STSs, resulting in more complete and efficient minimum tiling paths generated in silico. However, it is possible to utilize both approaches, as was demonstrated with the genome of the nematode C. elegans: the majority of the genome was tiled with BACs, and the gaps were filled in with YACs. See also Bacterial artificial chromosome (BAC) Cosmid Fosmid Genetic engineering Human artificial chromosome Autonomously replicating sequence (ARS) Cloning Vector References External links North Dakota State University Cloning and Cloning Vectors Resource Molecular Cell Biology 4th Edition [NCBI Database]: DNA Cloning with Plasmid Vectors, Ch. 7 Washington University Genome Institute Saccharomyces Genome Database Molecular biology
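The practical trade-off between YAC and BAC libraries described above can be illustrated with the standard Clarke–Carbon estimate of library size; this formula is a textbook tool, not something taken from this article, and the numerical values below are assumed for illustration. For a desired probability P of covering any given locus, with insert size I and genome size G, the number of clones required is

N = \frac{\ln(1 - P)}{\ln\!\left(1 - \dfrac{I}{G}\right)}

For P = 0.99 and a human genome of roughly G = 3.2 Gb, a YAC library with I of about 500 kb needs on the order of 3 x 10^4 clones, whereas a BAC library with I of about 150 kb needs roughly 10^5 clones; more clones are required, but each is far less prone to the chimerism and rearrangement artifacts described above.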
Yeast artificial chromosome
[ "Chemistry", "Biology" ]
1,250
[ "Biochemistry", "Molecular biology" ]
395,167
https://en.wikipedia.org/wiki/Froude%20number
In continuum mechanics, the Froude number (, after William Froude, ) is a dimensionless number defined as the ratio of the flow inertia to the external force field (the latter in many applications simply due to gravity). The Froude number is based on the speed–length ratio which he defined as: where is the local flow velocity (in m/s), is the local gravity field (in m/s2), and is a characteristic length (in m). The Froude number has some analogy with the Mach number. In theoretical fluid dynamics the Froude number is not frequently considered since usually the equations are considered in the high Froude limit of negligible external field, leading to homogeneous equations that preserve the mathematical aspects. For example, homogeneous Euler equations are conservation equations. However, in naval architecture the Froude number is a significant figure used to determine the resistance of a partially submerged object moving through water. Origins In open channel flows, introduced first the ratio of the flow velocity to the square root of the gravity acceleration times the flow depth. When the ratio was less than unity, the flow behaved like a fluvial motion (i.e., subcritical flow), and like a torrential flow motion when the ratio was greater than unity. Quantifying resistance of floating objects is generally credited to William Froude, who used a series of scale models to measure the resistance each model offered when towed at a given speed. The naval constructor Frederic Reech had put forward the concept much earlier in 1852 for testing ships and propellers but Froude was unaware of it. Speed–length ratio was originally defined by Froude in his Law of Comparison in 1868 in dimensional terms as: where: = flow speed = length of waterline The term was converted into non-dimensional terms and was given Froude's name in recognition of the work he did. In France, it is sometimes called Reech–Froude number after Frederic Reech. Definition and main application To show how the Froude number is linked to general continuum mechanics and not only to hydrodynamics we start from the Cauchy momentum equation in its dimensionless (nondimensional) form. Cauchy momentum equation In order to make the equations dimensionless, a characteristic length r0, and a characteristic velocity u0, need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained: Substitution of these inverse relations in the Euler momentum equations, and definition of the Froude number: and the Euler number: the equations are finally expressed (with the material derivative and now omitting the indexes): Cauchy-type equations in the high Froude limit (corresponding to negligible external field) are named free equations. On the other hand, in the low Euler limit (corresponding to negligible stress) general Cauchy momentum equation becomes an inhomogeneous Burgers equation (here we make explicit the material derivative): This is an inhomogeneous pure advection equation, as much as the Stokes equation is a pure diffusion equation. Euler momentum equation Euler momentum equation is a Cauchy momentum equation with the Pascal law being the stress constitutive relation: in nondimensional Lagrangian form is: Free Euler equations are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory. 
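For reference, the basic quantities discussed above can be stated in standard textbook form (the particular nondimensionalization of the Cauchy and Euler equations used in this treatment is not reproduced here):

\mathrm{Fr} = \frac{u}{\sqrt{g\,L}}

where u is the local flow velocity, g the local gravity field, and L a characteristic length. Froude's original dimensional speed–length ratio corresponds to v / \sqrt{L_{WL}}, with v the flow speed and L_{WL} the length of waterline.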
Incompressible Navier–Stokes momentum equation Incompressible Navier–Stokes momentum equation is a Cauchy momentum equation with the Pascal law and Stokes's law being the stress constitutive relations: in nondimensional convective form it is: where is the Reynolds number. Free Navier–Stokes equations are dissipative (non conservative). Other applications Ship hydrodynamics In marine hydrodynamic applications, the Froude number is usually referenced with the notation and is defined as: where is the relative flow velocity between the sea and ship, is in particular the acceleration due to gravity, and is the length of the ship at the water line level, or in some notations. It is an important parameter with respect to the ship's drag, or resistance, especially in terms of wave making resistance. In the case of planing craft, where the waterline length is too speed-dependent to be meaningful, the Froude number is best defined as displacement Froude number and the reference length is taken as the cubic root of the volumetric displacement of the hull: Shallow water waves For shallow water waves, such as tsunamis and hydraulic jumps, the characteristic velocity is the average flow velocity, averaged over the cross-section perpendicular to the flow direction. The wave velocity, termed celerity , is equal to the square root of gravitational acceleration , times cross-sectional area , divided by free-surface width : so the Froude number in shallow water is: For rectangular cross-sections with uniform depth , the Froude number can be simplified to: For the flow is called a subcritical flow, further for the flow is characterised as supercritical flow. When the flow is denoted as critical flow. Wind engineering When considering wind effects on dynamically sensitive structures such as suspension bridges it is sometimes necessary to simulate the combined effect of the vibrating mass of the structure with the fluctuating force of the wind. In such cases, the Froude number should be respected. Similarly, when simulating hot smoke plumes combined with natural wind, Froude number scaling is necessary to maintain the correct balance between buoyancy forces and the momentum of the wind. Allometry The Froude number has also been applied in allometry to studying the locomotion of terrestrial animals, including antelope and dinosaurs. Extended Froude number Geophysical mass flows such as avalanches and debris flows take place on inclined slopes which then merge into gentle and flat run-out zones. So, these flows are associated with the elevation of the topographic slopes that induce the gravity potential energy together with the pressure potential energy during the flow. Therefore, the classical Froude number should include this additional effect. For such a situation, Froude number needs to be re-defined. The extended Froude number is defined as the ratio between the kinetic and the potential energy: where is the mean flow velocity, , ( is the earth pressure coefficient, is the slope), , is the channel downslope position and is the distance from the point of the mass release along the channel to the point where the flow hits the horizontal reference datum; and are the pressure potential and gravity potential energies, respectively. In the classical definition of the shallow-water or granular flow Froude number, the potential energy associated with the surface elevation, , is not considered. The extended Froude number differs substantially from the classical Froude number for higher surface elevations. 
The term emerges from the change of the geometry of the moving mass along the slope. Dimensional analysis suggests that for shallow flows , while and are both of order unity. If the mass is shallow with a virtually bed-parallel free-surface, then can be disregarded. In this situation, if the gravity potential is not taken into account, then is unbounded even though the kinetic energy is bounded. So, formally considering the additional contribution due to the gravitational potential energy, the singularity in Fr is removed. Stirred tanks In the study of stirred tanks, the Froude number governs the formation of surface vortices. Since the impeller tip velocity is (circular motion), where is the impeller frequency (usually in rpm) and is the impeller radius (in engineering the diameter is much more frequently employed), the Froude number then takes the following form: The Froude number finds also a similar application in powder mixers. It will indeed be used to determine in which mixing regime the blender is working. If Fr<1, the particles are just stirred, but if Fr>1, centrifugal forces applied to the powder overcome gravity and the bed of particles becomes fluidized, at least in some part of the blender, promoting mixing Densimetric Froude number When used in the context of the Boussinesq approximation the densimetric Froude number is defined as where is the reduced gravity: The densimetric Froude number is usually preferred by modellers who wish to nondimensionalize a speed preference to the Richardson number which is more commonly encountered when considering stratified shear layers. For example, the leading edge of a gravity current moves with a front Froude number of about unity. Walking Froude number The Froude number may be used to study trends in animal gait patterns. In analyses of the dynamics of legged locomotion, a walking limb is often modeled as an inverted pendulum, where the center of mass goes through a circular arc centered at the foot. The Froude number is the ratio of the centripetal force around the center of motion, the foot, and the weight of the animal walking: where is the mass, is the characteristic length, is the acceleration due to gravity and is the velocity. The characteristic length may be chosen to suit the study at hand. For instance, some studies have used the vertical distance of the hip joint from the ground, while others have used total leg length. The Froude number may also be calculated from the stride frequency as follows: If total leg length is used as the characteristic length, then the theoretical maximum speed of walking has a Froude number of 1.0 since any higher value would result in takeoff and the foot missing the ground. The typical transition speed from bipedal walking to running occurs with . R. M. Alexander found that animals of different sizes and masses travelling at different speeds, but with the same Froude number, consistently exhibit similar gaits. This study found that animals typically switch from an amble to a symmetric running gait (e.g., a trot or pace) around a Froude number of 1.0. A preference for asymmetric gaits (e.g., a canter, transverse gallop, rotary gallop, bound, or pronk) was observed at Froude numbers between 2.0 and 3.0. Usage The Froude number is used to compare the wave making resistance between bodies of various sizes and shapes. In free-surface flow, the nature of the flow (supercritical or subcritical) depends upon whether the Froude number is greater than or less than unity. 
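The application-specific forms discussed above can be summarized compactly; these are the usual textbook conventions, and the symbols follow standard practice rather than this article's original notation:

\mathrm{Fn} = \frac{u}{\sqrt{g\,L}} \quad \text{(ship hydrodynamics, waterline length } L\text{)}, \qquad \mathrm{Fr} = \frac{U}{\sqrt{g\,A/B}} \quad \text{(shallow water, cross-section } A\text{, free-surface width } B\text{)}, \qquad \mathrm{Fr} = \frac{U}{\sqrt{g\,d}} \quad \text{(rectangular channel of depth } d\text{)}

\mathrm{Fr}' = \frac{u}{\sqrt{g'\,h}}, \quad g' = g\,\frac{\rho_1 - \rho_2}{\rho} \quad \text{(densimetric form, reference density convention varies)}, \qquad \mathrm{Fr} = \frac{v^2}{g\,l} \quad \text{(walking, limb length } l\text{)}

For stirred tanks, the text above works with impeller frequency and radius; the common engineering form, written with rotational speed N and impeller diameter D, is Fr = N^2 D / g.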
One can easily see the line of "critical" flow in a kitchen or bathroom sink. Leave it unplugged and let the faucet run. Near the place where the stream of water hits the sink, the flow is supercritical. It 'hugs' the surface and moves quickly. On the outer edge of the flow pattern the flow is subcritical. This flow is thicker and moves more slowly. The boundary between the two areas is called a "hydraulic jump". The jump starts where the flow is just critical and the Froude number is equal to 1.0. The Froude number has been used to study trends in animal locomotion in order to better understand why animals use different gait patterns as well as to form hypotheses about the gaits of extinct species. In addition, particle bed behavior can be quantified by the Froude number (Fr) in order to establish the optimum operating window. See also Notes References External links https://web.archive.org/web/20070927085042/http://www.qub.ac.uk/waves/fastferry/reference/MCA457.pdf Dimensionless numbers of fluid mechanics Fluid dynamics Naval architecture
Froude number
[ "Chemistry", "Engineering" ]
2,471
[ "Naval architecture", "Chemical engineering", "Marine engineering", "Piping", "Fluid dynamics" ]
395,248
https://en.wikipedia.org/wiki/Nabla%20symbol
The nabla is a triangular symbol resembling an inverted Greek delta: ∇. The name comes, by reason of the symbol's shape, from the Hellenistic Greek word for a Phoenician harp, and was suggested by the encyclopedist William Robertson Smith in an 1870 letter to Peter Guthrie Tait. The nabla symbol is available in standard HTML as &nabla; and in LaTeX as \nabla. In Unicode, it is the character at code point U+2207, or 8711 in decimal notation, in the Mathematical Operators block. As an operator, it is often called del. History The differential operator given in Cartesian coordinates on three-dimensional Euclidean space by i ∂/∂x + j ∂/∂y + k ∂/∂z was introduced in 1831 by the Irish mathematician and physicist William Rowan Hamilton, who called it ◁. (The unit vectors were originally right versors in Hamilton's quaternions.) The mathematics of ∇ received its full exposition at the hands of P. G. Tait. After receiving Smith's suggestion, Tait and James Clerk Maxwell referred to the operator as nabla in their extensive private correspondence; most of these references are of a humorous character. C. G. Knott's Life and Scientific Work of Peter Guthrie Tait (p. 145): It was probably this reluctance on the part of Maxwell to use the term Nabla in serious writings which prevented Tait from introducing the word earlier than he did. The one published use of the word by Maxwell is in the title to his humorous Tyndallic Ode, which is dedicated to the "Chief Musician upon Nabla", that is, Tait. William Thomson (Lord Kelvin) introduced the term to an American audience in an 1884 lecture; the notes were published in Britain and the U.S. in 1904. The name is acknowledged, and criticized, by Oliver Heaviside in 1891: The fictitious vector ∇ is very important. Physical mathematics is very largely the mathematics of ∇. The name Nabla seems, therefore, ludicrously inefficient. Heaviside and Josiah Willard Gibbs (independently) are credited with the development of the version of vector calculus most popular today. The influential 1901 text Vector Analysis, written by Edwin Bidwell Wilson and based on the lectures of Gibbs, advocates the name "del": This symbolic operator ∇ was introduced by Sir W. R. Hamilton and is now in universal employment. There seems, however, to be no universally recognized name for it, although owing to the frequent occurrence of the symbol some name is a practical necessity. It has been found by experience that the monosyllable del is so short and easy to pronounce that even in complicated formulae in which ∇ occurs a number of times, no inconvenience to the speaker or listener arises from the repetition. ∇V is read simply as "del V". This book is responsible for the form in which the mathematics of the operator in question is now usually expressed—most notably in undergraduate physics, and especially electrodynamics, textbooks. Modern uses The nabla is used in vector calculus as part of three distinct differential operators: the gradient (∇), the divergence (∇⋅), and the curl (∇×). The last of these uses the cross product and thus makes sense only in three dimensions; the first two are fully general. They were all originally studied in the context of the classical theory of electromagnetism, and contemporary university physics curricula typically treat the material using approximately the concepts and notation found in Gibbs and Wilson's Vector Analysis. The symbol is also used in differential geometry to denote a connection. 
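In Cartesian coordinates on three-dimensional Euclidean space, the three operators named above take the standard forms found in any vector calculus text (a scalar field f and a vector field F with components F_x, F_y, F_z are assumed):

\nabla f = \left(\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y},\ \frac{\partial f}{\partial z}\right), \qquad \nabla \cdot \mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}, \qquad \nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z},\ \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\ \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)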
A symbol of the same form, though presumably not genealogically related, appears in other areas, e.g.: As the all relation, particularly in lattice theory. As the backward difference operator, in the calculus of finite differences. As the widening operator, an operator that permits static analysis of programs to terminate in finite time, in the computer science field of abstract interpretation. As a function definition marker and self-reference (recursion) in the APL programming language. As an indicator of indeterminacy in philosophical logic. In naval architecture (ship design), to designate the volume displacement of a ship or any other waterborne vessel; the graphically similar delta is used to designate weight displacement (the total weight of water displaced by the ship), thus Δ = ρ∇, where ρ is the density of seawater. See also Del, treating the mathematics of the vector differential operator Del in cylindrical and spherical coordinates grad, div, and curl, differential operators defined using nabla History of quaternions Notation for differentiation Covariant derivative, also known as connection Nevel Footnotes External links Tai, Chen. A survey of the improper use of ∇ in vector analysis (1994). Mathematical symbols Differential operators William Rowan Hamilton
Nabla symbol
[ "Mathematics" ]
995
[ "Mathematical analysis", "Symbols", "Differential operators", "Mathematical symbols" ]